00:00:00.001 Started by upstream project "autotest-per-patch" build number 132528 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.818 The recommended git tool is: git 00:00:00.818 using credential 00000000-0000-0000-0000-000000000002 00:00:00.820 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.832 Fetching changes from the remote Git repository 00:00:00.836 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.848 Using shallow fetch with depth 1 00:00:00.848 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.848 > git --version # timeout=10 00:00:00.861 > git --version # 'git version 2.39.2' 00:00:00.861 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.873 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.873 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.144 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.158 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.176 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.176 > git config core.sparsecheckout # timeout=10 00:00:06.188 > git read-tree -mu HEAD # timeout=10 00:00:06.208 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.234 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.234 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.352 [Pipeline] Start of Pipeline 00:00:06.367 [Pipeline] library 00:00:06.369 Loading library shm_lib@master 00:00:06.369 Library shm_lib@master is cached. Copying from home. 00:00:06.388 [Pipeline] node 00:00:21.390 Still waiting to schedule task 00:00:21.391 Waiting for next available executor on ‘vagrant-vm-host’ 00:21:57.022 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:21:57.024 [Pipeline] { 00:21:57.035 [Pipeline] catchError 00:21:57.036 [Pipeline] { 00:21:57.052 [Pipeline] wrap 00:21:57.062 [Pipeline] { 00:21:57.071 [Pipeline] stage 00:21:57.074 [Pipeline] { (Prologue) 00:21:57.093 [Pipeline] echo 00:21:57.095 Node: VM-host-WFP7 00:21:57.102 [Pipeline] cleanWs 00:21:57.111 [WS-CLEANUP] Deleting project workspace... 00:21:57.111 [WS-CLEANUP] Deferred wipeout is used... 
00:21:57.117 [WS-CLEANUP] done 00:21:57.320 [Pipeline] setCustomBuildProperty 00:21:57.416 [Pipeline] httpRequest 00:21:57.833 [Pipeline] echo 00:21:57.835 Sorcerer 10.211.164.101 is alive 00:21:57.845 [Pipeline] retry 00:21:57.848 [Pipeline] { 00:21:57.862 [Pipeline] httpRequest 00:21:57.866 HttpMethod: GET 00:21:57.867 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:21:57.868 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:21:57.869 Response Code: HTTP/1.1 200 OK 00:21:57.870 Success: Status code 200 is in the accepted range: 200,404 00:21:57.870 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:21:58.016 [Pipeline] } 00:21:58.035 [Pipeline] // retry 00:21:58.044 [Pipeline] sh 00:21:58.328 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:21:58.344 [Pipeline] httpRequest 00:21:58.758 [Pipeline] echo 00:21:58.760 Sorcerer 10.211.164.101 is alive 00:21:58.771 [Pipeline] retry 00:21:58.773 [Pipeline] { 00:21:58.788 [Pipeline] httpRequest 00:21:58.792 HttpMethod: GET 00:21:58.793 URL: http://10.211.164.101/packages/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:21:58.793 Sending request to url: http://10.211.164.101/packages/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:21:58.794 Response Code: HTTP/1.1 200 OK 00:21:58.795 Success: Status code 200 is in the accepted range: 200,404 00:21:58.796 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:22:01.068 [Pipeline] } 00:22:01.091 [Pipeline] // retry 00:22:01.099 [Pipeline] sh 00:22:01.383 + tar --no-same-owner -xf spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:22:04.693 [Pipeline] sh 00:22:04.971 + git -C spdk log --oneline -n5 00:22:04.971 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:22:04.971 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:22:04.971 e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled 00:22:04.971 27c6508ea bdev: Add spdk_bdev_io_hide_metadata() for bdev modules 00:22:04.971 c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:22:04.992 [Pipeline] writeFile 00:22:05.010 [Pipeline] sh 00:22:05.291 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:22:05.303 [Pipeline] sh 00:22:05.584 + cat autorun-spdk.conf 00:22:05.584 SPDK_RUN_FUNCTIONAL_TEST=1 00:22:05.584 SPDK_TEST_NVME=1 00:22:05.584 SPDK_TEST_FTL=1 00:22:05.584 SPDK_TEST_ISAL=1 00:22:05.584 SPDK_RUN_ASAN=1 00:22:05.584 SPDK_RUN_UBSAN=1 00:22:05.584 SPDK_TEST_XNVME=1 00:22:05.584 SPDK_TEST_NVME_FDP=1 00:22:05.584 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:22:05.592 RUN_NIGHTLY=0 00:22:05.594 [Pipeline] } 00:22:05.608 [Pipeline] // stage 00:22:05.625 [Pipeline] stage 00:22:05.627 [Pipeline] { (Run VM) 00:22:05.641 [Pipeline] sh 00:22:05.923 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:22:05.923 + echo 'Start stage prepare_nvme.sh' 00:22:05.923 Start stage prepare_nvme.sh 00:22:05.923 + [[ -n 4 ]] 00:22:05.923 + disk_prefix=ex4 00:22:05.923 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:22:05.923 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:22:05.923 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:22:05.923 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:22:05.923 ++ SPDK_TEST_NVME=1 00:22:05.923 ++ 
SPDK_TEST_FTL=1 00:22:05.923 ++ SPDK_TEST_ISAL=1 00:22:05.923 ++ SPDK_RUN_ASAN=1 00:22:05.923 ++ SPDK_RUN_UBSAN=1 00:22:05.923 ++ SPDK_TEST_XNVME=1 00:22:05.923 ++ SPDK_TEST_NVME_FDP=1 00:22:05.923 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:22:05.923 ++ RUN_NIGHTLY=0 00:22:05.923 + cd /var/jenkins/workspace/nvme-vg-autotest 00:22:05.923 + nvme_files=() 00:22:05.923 + declare -A nvme_files 00:22:05.923 + backend_dir=/var/lib/libvirt/images/backends 00:22:05.923 + nvme_files['nvme.img']=5G 00:22:05.923 + nvme_files['nvme-cmb.img']=5G 00:22:05.923 + nvme_files['nvme-multi0.img']=4G 00:22:05.923 + nvme_files['nvme-multi1.img']=4G 00:22:05.923 + nvme_files['nvme-multi2.img']=4G 00:22:05.923 + nvme_files['nvme-openstack.img']=8G 00:22:05.923 + nvme_files['nvme-zns.img']=5G 00:22:05.923 + (( SPDK_TEST_NVME_PMR == 1 )) 00:22:05.923 + (( SPDK_TEST_FTL == 1 )) 00:22:05.923 + nvme_files["nvme-ftl.img"]=6G 00:22:05.923 + (( SPDK_TEST_NVME_FDP == 1 )) 00:22:05.923 + nvme_files["nvme-fdp.img"]=1G 00:22:05.923 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:22:05.923 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:22:05.923 + for nvme in "${!nvme_files[@]}" 00:22:05.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:22:06.862 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:22:06.862 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:22:06.862 + echo 'End stage prepare_nvme.sh' 00:22:06.862 End stage prepare_nvme.sh 00:22:06.874 [Pipeline] sh 00:22:07.156 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:22:07.156 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:22:07.156 00:22:07.157 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:22:07.157 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:22:07.157 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:22:07.157 HELP=0 00:22:07.157 DRY_RUN=0 00:22:07.157 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:22:07.157 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:22:07.157 NVME_AUTO_CREATE=0 00:22:07.157 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:22:07.157 NVME_CMB=,,,, 00:22:07.157 NVME_PMR=,,,, 00:22:07.157 NVME_ZNS=,,,, 00:22:07.157 NVME_MS=true,,,, 00:22:07.157 NVME_FDP=,,,on, 00:22:07.157 SPDK_VAGRANT_DISTRO=fedora39 00:22:07.157 SPDK_VAGRANT_VMCPU=10 00:22:07.157 SPDK_VAGRANT_VMRAM=12288 00:22:07.157 SPDK_VAGRANT_PROVIDER=libvirt 00:22:07.157 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:22:07.157 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:22:07.157 SPDK_OPENSTACK_NETWORK=0 00:22:07.157 VAGRANT_PACKAGE_BOX=0 00:22:07.157 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:22:07.157 FORCE_DISTRO=true 00:22:07.157 VAGRANT_BOX_VERSION= 00:22:07.157 EXTRA_VAGRANTFILES= 00:22:07.157 NIC_MODEL=virtio 00:22:07.157 00:22:07.157 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:22:07.157 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:22:10.474 Bringing machine 'default' up with 'libvirt' provider... 00:22:11.126 ==> default: Creating image (snapshot of base box volume). 00:22:11.693 ==> default: Creating domain with the following settings... 
00:22:11.693 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732641528_327e85d97b6360e54bb9 00:22:11.693 ==> default: -- Domain type: kvm 00:22:11.693 ==> default: -- Cpus: 10 00:22:11.693 ==> default: -- Feature: acpi 00:22:11.693 ==> default: -- Feature: apic 00:22:11.693 ==> default: -- Feature: pae 00:22:11.693 ==> default: -- Memory: 12288M 00:22:11.693 ==> default: -- Memory Backing: hugepages: 00:22:11.693 ==> default: -- Management MAC: 00:22:11.693 ==> default: -- Loader: 00:22:11.693 ==> default: -- Nvram: 00:22:11.693 ==> default: -- Base box: spdk/fedora39 00:22:11.693 ==> default: -- Storage pool: default 00:22:11.693 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732641528_327e85d97b6360e54bb9.img (20G) 00:22:11.693 ==> default: -- Volume Cache: default 00:22:11.693 ==> default: -- Kernel: 00:22:11.693 ==> default: -- Initrd: 00:22:11.693 ==> default: -- Graphics Type: vnc 00:22:11.693 ==> default: -- Graphics Port: -1 00:22:11.693 ==> default: -- Graphics IP: 127.0.0.1 00:22:11.693 ==> default: -- Graphics Password: Not defined 00:22:11.693 ==> default: -- Video Type: cirrus 00:22:11.693 ==> default: -- Video VRAM: 9216 00:22:11.693 ==> default: -- Sound Type: 00:22:11.693 ==> default: -- Keymap: en-us 00:22:11.693 ==> default: -- TPM Path: 00:22:11.693 ==> default: -- INPUT: type=mouse, bus=ps2 00:22:11.693 ==> default: -- Command line args: 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:22:11.693 ==> default: -> value=-drive, 00:22:11.693 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:22:11.693 ==> default: -> value=-drive, 00:22:11.693 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:22:11.693 ==> default: -> value=-drive, 00:22:11.693 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:11.693 ==> default: -> value=-drive, 00:22:11.693 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:11.693 ==> default: -> value=-drive, 00:22:11.693 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:22:11.693 ==> default: -> value=-drive, 00:22:11.693 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:22:11.693 ==> default: -> value=-device, 00:22:11.693 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:11.693 ==> default: Creating shared folders metadata... 00:22:11.693 ==> default: Starting domain. 00:22:13.593 ==> default: Waiting for domain to get an IP address... 00:22:28.637 ==> default: Waiting for SSH to become available... 00:22:30.016 ==> default: Configuring and enabling network interfaces... 00:22:35.378 default: SSH address: 192.168.121.48:22 00:22:35.378 default: SSH username: vagrant 00:22:35.378 default: SSH auth method: private key 00:22:37.279 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:22:45.403 ==> default: Mounting SSHFS shared folder... 00:22:47.940 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:22:47.940 ==> default: Checking Mount.. 00:22:48.878 ==> default: Folder Successfully Mounted! 00:22:48.878 ==> default: Running provisioner: file... 00:22:50.281 default: ~/.gitconfig => .gitconfig 00:22:50.540 00:22:50.540 SUCCESS! 00:22:50.540 00:22:50.540 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:22:50.540 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:22:50.540 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:22:50.540 00:22:50.550 [Pipeline] } 00:22:50.568 [Pipeline] // stage 00:22:50.579 [Pipeline] dir 00:22:50.579 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:22:50.582 [Pipeline] { 00:22:50.596 [Pipeline] catchError 00:22:50.599 [Pipeline] { 00:22:50.611 [Pipeline] sh 00:22:50.895 + vagrant ssh-config --host vagrant 00:22:50.895 + sed -ne /^Host/,$p 00:22:50.895 + tee ssh_conf 00:22:54.185 Host vagrant 00:22:54.185 HostName 192.168.121.48 00:22:54.185 User vagrant 00:22:54.185 Port 22 00:22:54.185 UserKnownHostsFile /dev/null 00:22:54.185 StrictHostKeyChecking no 00:22:54.185 PasswordAuthentication no 00:22:54.185 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:22:54.185 IdentitiesOnly yes 00:22:54.185 LogLevel FATAL 00:22:54.185 ForwardAgent yes 00:22:54.185 ForwardX11 yes 00:22:54.185 00:22:54.198 [Pipeline] withEnv 00:22:54.200 [Pipeline] { 00:22:54.213 [Pipeline] sh 00:22:54.504 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:22:54.504 source /etc/os-release 00:22:54.504 [[ -e /image.version ]] && img=$(< /image.version) 00:22:54.504 # Minimal, systemd-like check. 
00:22:54.504 if [[ -e /.dockerenv ]]; then 00:22:54.504 # Clear garbage from the node's name: 00:22:54.504 # agt-er_autotest_547-896 -> autotest_547-896 00:22:54.504 # $HOSTNAME is the actual container id 00:22:54.504 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:22:54.504 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:22:54.504 # We can assume this is a mount from a host where container is running, 00:22:54.504 # so fetch its hostname to easily identify the target swarm worker. 00:22:54.504 container="$(< /etc/hostname) ($agent)" 00:22:54.504 else 00:22:54.504 # Fallback 00:22:54.504 container=$agent 00:22:54.504 fi 00:22:54.504 fi 00:22:54.504 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:22:54.504 00:22:54.786 [Pipeline] } 00:22:54.803 [Pipeline] // withEnv 00:22:54.812 [Pipeline] setCustomBuildProperty 00:22:54.828 [Pipeline] stage 00:22:54.830 [Pipeline] { (Tests) 00:22:54.847 [Pipeline] sh 00:22:55.129 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:22:55.403 [Pipeline] sh 00:22:55.684 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:22:55.959 [Pipeline] timeout 00:22:55.959 Timeout set to expire in 50 min 00:22:55.961 [Pipeline] { 00:22:55.977 [Pipeline] sh 00:22:56.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:22:56.836 HEAD is now at f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:22:56.850 [Pipeline] sh 00:22:57.132 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:22:57.409 [Pipeline] sh 00:22:57.698 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:22:58.065 [Pipeline] sh 00:22:58.348 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:22:58.607 ++ readlink -f spdk_repo 00:22:58.607 + DIR_ROOT=/home/vagrant/spdk_repo 00:22:58.607 + [[ -n /home/vagrant/spdk_repo ]] 00:22:58.607 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:22:58.607 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:22:58.607 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:22:58.607 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:22:58.607 + [[ -d /home/vagrant/spdk_repo/output ]] 00:22:58.607 + [[ nvme-vg-autotest == pkgdep-* ]] 00:22:58.607 + cd /home/vagrant/spdk_repo 00:22:58.607 + source /etc/os-release 00:22:58.607 ++ NAME='Fedora Linux' 00:22:58.607 ++ VERSION='39 (Cloud Edition)' 00:22:58.607 ++ ID=fedora 00:22:58.607 ++ VERSION_ID=39 00:22:58.607 ++ VERSION_CODENAME= 00:22:58.607 ++ PLATFORM_ID=platform:f39 00:22:58.607 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:22:58.607 ++ ANSI_COLOR='0;38;2;60;110;180' 00:22:58.607 ++ LOGO=fedora-logo-icon 00:22:58.607 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:22:58.607 ++ HOME_URL=https://fedoraproject.org/ 00:22:58.607 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:22:58.607 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:22:58.607 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:22:58.607 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:22:58.607 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:22:58.607 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:22:58.607 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:22:58.607 ++ SUPPORT_END=2024-11-12 00:22:58.607 ++ VARIANT='Cloud Edition' 00:22:58.607 ++ VARIANT_ID=cloud 00:22:58.607 + uname -a 00:22:58.607 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:22:58.607 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:22:59.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:59.434 Hugepages 00:22:59.434 node hugesize free / total 00:22:59.434 node0 1048576kB 0 / 0 00:22:59.434 node0 2048kB 0 / 0 00:22:59.434 00:22:59.434 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:59.434 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:22:59.434 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:22:59.434 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:22:59.434 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:22:59.434 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:22:59.434 + rm -f /tmp/spdk-ld-path 00:22:59.434 + source autorun-spdk.conf 00:22:59.434 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:22:59.434 ++ SPDK_TEST_NVME=1 00:22:59.434 ++ SPDK_TEST_FTL=1 00:22:59.434 ++ SPDK_TEST_ISAL=1 00:22:59.434 ++ SPDK_RUN_ASAN=1 00:22:59.434 ++ SPDK_RUN_UBSAN=1 00:22:59.434 ++ SPDK_TEST_XNVME=1 00:22:59.434 ++ SPDK_TEST_NVME_FDP=1 00:22:59.434 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:22:59.434 ++ RUN_NIGHTLY=0 00:22:59.434 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:22:59.434 + [[ -n '' ]] 00:22:59.434 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:22:59.693 + for M in /var/spdk/build-*-manifest.txt 00:22:59.693 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:22:59.693 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:22:59.693 + for M in /var/spdk/build-*-manifest.txt 00:22:59.693 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:22:59.693 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:22:59.693 + for M in /var/spdk/build-*-manifest.txt 00:22:59.693 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:22:59.693 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:22:59.693 ++ uname 00:22:59.693 + [[ Linux == \L\i\n\u\x ]] 00:22:59.693 + sudo dmesg -T 00:22:59.693 + sudo dmesg --clear 00:22:59.693 + dmesg_pid=5461 00:22:59.693 
+ sudo dmesg -Tw 00:22:59.693 + [[ Fedora Linux == FreeBSD ]] 00:22:59.693 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:59.693 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:59.693 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:22:59.693 + [[ -x /usr/src/fio-static/fio ]] 00:22:59.693 + export FIO_BIN=/usr/src/fio-static/fio 00:22:59.693 + FIO_BIN=/usr/src/fio-static/fio 00:22:59.693 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:22:59.693 + [[ ! -v VFIO_QEMU_BIN ]] 00:22:59.693 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:22:59.693 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:22:59.693 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:22:59.693 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:22:59.693 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:22:59.693 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:22:59.693 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:22:59.693 17:19:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:22:59.693 17:19:37 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:22:59.693 17:19:37 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:22:59.693 17:19:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:22:59.693 17:19:37 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:22:59.952 17:19:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:22:59.952 17:19:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:59.952 17:19:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:22:59.952 17:19:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:59.952 17:19:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.952 17:19:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.952 17:19:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.952 17:19:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.952 17:19:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.952 17:19:37 -- paths/export.sh@5 -- $ export PATH 00:22:59.953 17:19:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.953 17:19:37 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:59.953 17:19:37 -- common/autobuild_common.sh@493 -- $ date +%s 00:22:59.953 17:19:37 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732641577.XXXXXX 00:22:59.953 17:19:37 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732641577.CGATmI 00:22:59.953 17:19:37 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:22:59.953 17:19:37 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:22:59.953 17:19:37 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:59.953 17:19:37 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:59.953 17:19:37 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:59.953 17:19:37 -- common/autobuild_common.sh@509 -- $ get_config_params 00:22:59.953 17:19:37 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:22:59.953 17:19:37 -- common/autotest_common.sh@10 -- $ set +x 00:22:59.953 17:19:37 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:22:59.953 17:19:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:22:59.953 17:19:37 -- pm/common@17 -- $ local monitor 00:22:59.953 17:19:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:59.953 17:19:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:59.953 17:19:37 -- pm/common@25 -- $ sleep 1 00:22:59.953 17:19:37 -- pm/common@21 -- $ date +%s 00:22:59.953 17:19:37 -- pm/common@21 -- $ date +%s 00:22:59.953 17:19:37 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641577 00:22:59.953 17:19:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641577 00:22:59.953 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641577_collect-cpu-load.pm.log 00:22:59.953 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641577_collect-vmstat.pm.log 00:23:00.889 17:19:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:23:00.889 17:19:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:23:00.889 17:19:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:23:00.889 17:19:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:00.889 17:19:38 -- spdk/autobuild.sh@16 -- $ date -u 00:23:00.889 Tue Nov 26 05:19:38 PM UTC 2024 00:23:00.889 17:19:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:23:00.889 v25.01-pre-268-gf7ce15267 00:23:00.889 17:19:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:23:00.889 17:19:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:23:00.889 17:19:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:23:00.889 17:19:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:23:00.889 17:19:38 -- common/autotest_common.sh@10 -- $ set +x 00:23:00.889 ************************************ 00:23:00.889 START TEST asan 00:23:00.889 ************************************ 00:23:00.889 using asan 00:23:00.889 17:19:38 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:23:00.889 00:23:00.889 real 0m0.001s 00:23:00.889 user 0m0.000s 00:23:00.889 sys 0m0.000s 00:23:00.889 17:19:38 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:23:00.889 17:19:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:23:00.889 ************************************ 00:23:00.889 END TEST asan 00:23:00.889 ************************************ 00:23:01.147 17:19:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:23:01.147 17:19:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:23:01.147 17:19:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:23:01.147 17:19:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:23:01.147 17:19:38 -- common/autotest_common.sh@10 -- $ set +x 00:23:01.147 ************************************ 00:23:01.147 START TEST ubsan 00:23:01.147 ************************************ 00:23:01.147 using ubsan 00:23:01.147 17:19:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:23:01.147 00:23:01.147 real 0m0.000s 00:23:01.147 user 0m0.000s 00:23:01.147 sys 0m0.000s 00:23:01.147 17:19:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:23:01.147 17:19:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:23:01.147 ************************************ 00:23:01.147 END TEST ubsan 00:23:01.147 ************************************ 00:23:01.147 17:19:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:23:01.147 17:19:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:23:01.147 17:19:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:23:01.147 17:19:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:23:01.147 17:19:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:23:01.147 17:19:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:23:01.147 17:19:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:23:01.147 17:19:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:23:01.147 17:19:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:23:01.147 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:01.147 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:01.714 Using 'verbs' RDMA provider 00:23:17.973 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:23:36.062 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:23:36.062 Creating mk/config.mk...done. 00:23:36.062 Creating mk/cc.flags.mk...done. 00:23:36.062 Type 'make' to build. 00:23:36.062 17:20:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:23:36.062 17:20:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:23:36.062 17:20:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:23:36.062 17:20:11 -- common/autotest_common.sh@10 -- $ set +x 00:23:36.062 ************************************ 00:23:36.062 START TEST make 00:23:36.062 ************************************ 00:23:36.062 17:20:11 make -- common/autotest_common.sh@1129 -- $ make -j10 00:23:36.062 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:23:36.062 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:23:36.062 meson setup builddir \ 00:23:36.062 -Dwith-libaio=enabled \ 00:23:36.062 -Dwith-liburing=enabled \ 00:23:36.062 -Dwith-libvfn=disabled \ 00:23:36.062 -Dwith-spdk=disabled \ 00:23:36.062 -Dexamples=false \ 00:23:36.062 -Dtests=false \ 00:23:36.062 -Dtools=false && \ 00:23:36.062 meson compile -C builddir && \ 00:23:36.062 cd -) 00:23:36.062 make[1]: Nothing to be done for 'all'. 
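For reference, the xnvme build step echoed above can be reproduced outside the CI VM with a minimal sketch like the following (assuming a local SPDK checkout at ~/spdk, a hypothetical path; the meson options are copied verbatim from the command that make prints above):

# Sketch: rebuild the bundled xnvme subproject the way the CI make step does.
# ~/spdk is a hypothetical local checkout path; everything else mirrors the log.
cd ~/spdk/xnvme
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
meson setup builddir \
  -Dwith-libaio=enabled \
  -Dwith-liburing=enabled \
  -Dwith-libvfn=disabled \
  -Dwith-spdk=disabled \
  -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir

With the libvfn and SPDK backends disabled, only the libaio and io_uring I/O paths are built, which matches the dependency summary in the Meson output that follows.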
00:23:37.001 The Meson build system 00:23:37.001 Version: 1.5.0 00:23:37.001 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:23:37.001 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:23:37.001 Build type: native build 00:23:37.001 Project name: xnvme 00:23:37.001 Project version: 0.7.5 00:23:37.001 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:23:37.001 C linker for the host machine: cc ld.bfd 2.40-14 00:23:37.001 Host machine cpu family: x86_64 00:23:37.001 Host machine cpu: x86_64 00:23:37.001 Message: host_machine.system: linux 00:23:37.001 Compiler for C supports arguments -Wno-missing-braces: YES 00:23:37.001 Compiler for C supports arguments -Wno-cast-function-type: YES 00:23:37.001 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:23:37.002 Run-time dependency threads found: YES 00:23:37.002 Has header "setupapi.h" : NO 00:23:37.002 Has header "linux/blkzoned.h" : YES 00:23:37.002 Has header "linux/blkzoned.h" : YES (cached) 00:23:37.002 Has header "libaio.h" : YES 00:23:37.002 Library aio found: YES 00:23:37.002 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:23:37.002 Run-time dependency liburing found: YES 2.2 00:23:37.002 Dependency libvfn skipped: feature with-libvfn disabled 00:23:37.002 Found CMake: /usr/bin/cmake (3.27.7) 00:23:37.002 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:23:37.002 Subproject spdk : skipped: feature with-spdk disabled 00:23:37.002 Run-time dependency appleframeworks found: NO (tried framework) 00:23:37.002 Run-time dependency appleframeworks found: NO (tried framework) 00:23:37.002 Library rt found: YES 00:23:37.002 Checking for function "clock_gettime" with dependency -lrt: YES 00:23:37.002 Configuring xnvme_config.h using configuration 00:23:37.002 Configuring xnvme.spec using configuration 00:23:37.002 Run-time dependency bash-completion found: YES 2.11 00:23:37.002 Message: Bash-completions: /usr/share/bash-completion/completions 00:23:37.002 Program cp found: YES (/usr/bin/cp) 00:23:37.002 Build targets in project: 3 00:23:37.002 00:23:37.002 xnvme 0.7.5 00:23:37.002 00:23:37.002 Subprojects 00:23:37.002 spdk : NO Feature 'with-spdk' disabled 00:23:37.002 00:23:37.002 User defined options 00:23:37.002 examples : false 00:23:37.002 tests : false 00:23:37.002 tools : false 00:23:37.002 with-libaio : enabled 00:23:37.002 with-liburing: enabled 00:23:37.002 with-libvfn : disabled 00:23:37.002 with-spdk : disabled 00:23:37.002 00:23:37.002 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:23:37.261 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:23:37.261 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:23:37.261 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:23:37.261 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:23:37.261 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:23:37.261 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:23:37.261 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:23:37.521 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:23:37.521 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:23:37.521 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:23:37.521 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:23:37.521 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:23:37.521 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:23:37.521 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:23:37.521 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:23:37.521 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:23:37.521 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:23:37.521 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:23:37.521 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:23:37.521 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:23:37.521 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:23:37.521 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:23:37.521 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:23:37.521 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:23:37.521 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:23:37.521 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:23:37.521 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:23:37.521 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:23:37.521 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:23:37.782 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:23:37.782 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:23:37.782 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:23:37.782 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:23:37.782 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:23:37.782 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:23:37.782 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:23:37.782 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:23:37.782 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:23:37.782 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:23:37.782 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:23:37.782 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:23:37.782 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:23:37.782 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:23:37.782 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:23:37.782 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:23:37.782 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:23:37.782 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:23:37.782 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:23:37.782 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:23:37.782 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:23:37.782 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 
00:23:37.782 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:23:37.782 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:23:37.782 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:23:37.782 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:23:37.782 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:23:37.782 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:23:37.782 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:23:37.782 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:23:37.782 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:23:37.782 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:23:38.041 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:23:38.041 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:23:38.041 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:23:38.041 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:23:38.042 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:23:38.042 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:23:38.042 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:23:38.042 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:23:38.042 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:23:38.042 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:23:38.042 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:23:38.301 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:23:38.301 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:23:38.561 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:23:38.561 [75/76] Linking static target lib/libxnvme.a 00:23:38.561 [76/76] Linking target lib/libxnvme.so.0.7.5 00:23:38.561 INFO: autodetecting backend as ninja 00:23:38.561 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:23:38.561 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:23:46.704 The Meson build system 00:23:46.704 Version: 1.5.0 00:23:46.704 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:23:46.704 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:23:46.704 Build type: native build 00:23:46.704 Program cat found: YES (/usr/bin/cat) 00:23:46.704 Project name: DPDK 00:23:46.704 Project version: 24.03.0 00:23:46.704 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:23:46.704 C linker for the host machine: cc ld.bfd 2.40-14 00:23:46.704 Host machine cpu family: x86_64 00:23:46.704 Host machine cpu: x86_64 00:23:46.704 Message: ## Building in Developer Mode ## 00:23:46.704 Program pkg-config found: YES (/usr/bin/pkg-config) 00:23:46.704 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:23:46.704 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:23:46.704 Program python3 found: YES (/usr/bin/python3) 00:23:46.704 Program cat found: YES (/usr/bin/cat) 00:23:46.704 Compiler for C supports arguments -march=native: YES 00:23:46.704 Checking for size of "void *" : 8 00:23:46.704 Checking for size of "void *" : 8 (cached) 00:23:46.704 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:23:46.704 Library m found: YES 00:23:46.704 Library numa found: YES 00:23:46.704 Has header "numaif.h" : YES 00:23:46.704 Library fdt found: NO 00:23:46.704 Library execinfo found: NO 00:23:46.704 Has header "execinfo.h" : YES 00:23:46.704 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:23:46.704 Run-time dependency libarchive found: NO (tried pkgconfig) 00:23:46.704 Run-time dependency libbsd found: NO (tried pkgconfig) 00:23:46.704 Run-time dependency jansson found: NO (tried pkgconfig) 00:23:46.704 Run-time dependency openssl found: YES 3.1.1 00:23:46.704 Run-time dependency libpcap found: YES 1.10.4 00:23:46.704 Has header "pcap.h" with dependency libpcap: YES 00:23:46.704 Compiler for C supports arguments -Wcast-qual: YES 00:23:46.704 Compiler for C supports arguments -Wdeprecated: YES 00:23:46.704 Compiler for C supports arguments -Wformat: YES 00:23:46.704 Compiler for C supports arguments -Wformat-nonliteral: NO 00:23:46.704 Compiler for C supports arguments -Wformat-security: NO 00:23:46.704 Compiler for C supports arguments -Wmissing-declarations: YES 00:23:46.704 Compiler for C supports arguments -Wmissing-prototypes: YES 00:23:46.704 Compiler for C supports arguments -Wnested-externs: YES 00:23:46.704 Compiler for C supports arguments -Wold-style-definition: YES 00:23:46.704 Compiler for C supports arguments -Wpointer-arith: YES 00:23:46.704 Compiler for C supports arguments -Wsign-compare: YES 00:23:46.704 Compiler for C supports arguments -Wstrict-prototypes: YES 00:23:46.704 Compiler for C supports arguments -Wundef: YES 00:23:46.704 Compiler for C supports arguments -Wwrite-strings: YES 00:23:46.704 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:23:46.704 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:23:46.704 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:23:46.704 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:23:46.704 Program objdump found: YES (/usr/bin/objdump) 00:23:46.704 Compiler for C supports arguments -mavx512f: YES 00:23:46.704 Checking if "AVX512 checking" compiles: YES 00:23:46.704 Fetching value of define "__SSE4_2__" : 1 00:23:46.704 Fetching value of define "__AES__" : 1 00:23:46.704 Fetching value of define "__AVX__" : 1 00:23:46.704 Fetching value of define "__AVX2__" : 1 00:23:46.704 Fetching value of define "__AVX512BW__" : 1 00:23:46.704 Fetching value of define "__AVX512CD__" : 1 00:23:46.704 Fetching value of define "__AVX512DQ__" : 1 00:23:46.704 Fetching value of define "__AVX512F__" : 1 00:23:46.704 Fetching value of define "__AVX512VL__" : 1 00:23:46.704 Fetching value of define "__PCLMUL__" : 1 00:23:46.704 Fetching value of define "__RDRND__" : 1 00:23:46.704 Fetching value of define "__RDSEED__" : 1 00:23:46.704 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:23:46.704 Fetching value of define "__znver1__" : (undefined) 00:23:46.704 Fetching value of define "__znver2__" : (undefined) 00:23:46.704 Fetching value of define "__znver3__" : (undefined) 00:23:46.704 Fetching value of define "__znver4__" : (undefined) 00:23:46.704 Library asan found: YES 00:23:46.704 Compiler for C supports arguments -Wno-format-truncation: YES 00:23:46.704 Message: lib/log: Defining dependency "log" 00:23:46.704 Message: lib/kvargs: Defining dependency "kvargs" 00:23:46.704 Message: lib/telemetry: Defining dependency "telemetry" 00:23:46.704 Library rt found: YES 00:23:46.704 Checking for function "getentropy" : 
NO 00:23:46.704 Message: lib/eal: Defining dependency "eal" 00:23:46.704 Message: lib/ring: Defining dependency "ring" 00:23:46.704 Message: lib/rcu: Defining dependency "rcu" 00:23:46.704 Message: lib/mempool: Defining dependency "mempool" 00:23:46.704 Message: lib/mbuf: Defining dependency "mbuf" 00:23:46.704 Fetching value of define "__PCLMUL__" : 1 (cached) 00:23:46.704 Fetching value of define "__AVX512F__" : 1 (cached) 00:23:46.704 Fetching value of define "__AVX512BW__" : 1 (cached) 00:23:46.704 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:23:46.704 Fetching value of define "__AVX512VL__" : 1 (cached) 00:23:46.704 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:23:46.704 Compiler for C supports arguments -mpclmul: YES 00:23:46.704 Compiler for C supports arguments -maes: YES 00:23:46.704 Compiler for C supports arguments -mavx512f: YES (cached) 00:23:46.704 Compiler for C supports arguments -mavx512bw: YES 00:23:46.704 Compiler for C supports arguments -mavx512dq: YES 00:23:46.704 Compiler for C supports arguments -mavx512vl: YES 00:23:46.704 Compiler for C supports arguments -mvpclmulqdq: YES 00:23:46.704 Compiler for C supports arguments -mavx2: YES 00:23:46.704 Compiler for C supports arguments -mavx: YES 00:23:46.704 Message: lib/net: Defining dependency "net" 00:23:46.704 Message: lib/meter: Defining dependency "meter" 00:23:46.704 Message: lib/ethdev: Defining dependency "ethdev" 00:23:46.704 Message: lib/pci: Defining dependency "pci" 00:23:46.704 Message: lib/cmdline: Defining dependency "cmdline" 00:23:46.704 Message: lib/hash: Defining dependency "hash" 00:23:46.704 Message: lib/timer: Defining dependency "timer" 00:23:46.704 Message: lib/compressdev: Defining dependency "compressdev" 00:23:46.704 Message: lib/cryptodev: Defining dependency "cryptodev" 00:23:46.704 Message: lib/dmadev: Defining dependency "dmadev" 00:23:46.704 Compiler for C supports arguments -Wno-cast-qual: YES 00:23:46.704 Message: lib/power: Defining dependency "power" 00:23:46.704 Message: lib/reorder: Defining dependency "reorder" 00:23:46.704 Message: lib/security: Defining dependency "security" 00:23:46.704 Has header "linux/userfaultfd.h" : YES 00:23:46.704 Has header "linux/vduse.h" : YES 00:23:46.704 Message: lib/vhost: Defining dependency "vhost" 00:23:46.704 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:23:46.704 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:23:46.704 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:23:46.704 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:23:46.704 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:23:46.704 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:23:46.704 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:23:46.704 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:23:46.704 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:23:46.704 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:23:46.704 Program doxygen found: YES (/usr/local/bin/doxygen) 00:23:46.704 Configuring doxy-api-html.conf using configuration 00:23:46.704 Configuring doxy-api-man.conf using configuration 00:23:46.704 Program mandb found: YES (/usr/bin/mandb) 00:23:46.704 Program sphinx-build found: NO 00:23:46.704 Configuring rte_build_config.h using configuration 00:23:46.704 Message: 00:23:46.704 ================= 00:23:46.704 
Applications Enabled 00:23:46.704 ================= 00:23:46.704 00:23:46.704 apps: 00:23:46.704 00:23:46.704 00:23:46.705 Message: 00:23:46.705 ================= 00:23:46.705 Libraries Enabled 00:23:46.705 ================= 00:23:46.705 00:23:46.705 libs: 00:23:46.705 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:23:46.705 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:23:46.705 cryptodev, dmadev, power, reorder, security, vhost, 00:23:46.705 00:23:46.705 Message: 00:23:46.705 =============== 00:23:46.705 Drivers Enabled 00:23:46.705 =============== 00:23:46.705 00:23:46.705 common: 00:23:46.705 00:23:46.705 bus: 00:23:46.705 pci, vdev, 00:23:46.705 mempool: 00:23:46.705 ring, 00:23:46.705 dma: 00:23:46.705 00:23:46.705 net: 00:23:46.705 00:23:46.705 crypto: 00:23:46.705 00:23:46.705 compress: 00:23:46.705 00:23:46.705 vdpa: 00:23:46.705 00:23:46.705 00:23:46.705 Message: 00:23:46.705 ================= 00:23:46.705 Content Skipped 00:23:46.705 ================= 00:23:46.705 00:23:46.705 apps: 00:23:46.705 dumpcap: explicitly disabled via build config 00:23:46.705 graph: explicitly disabled via build config 00:23:46.705 pdump: explicitly disabled via build config 00:23:46.705 proc-info: explicitly disabled via build config 00:23:46.705 test-acl: explicitly disabled via build config 00:23:46.705 test-bbdev: explicitly disabled via build config 00:23:46.705 test-cmdline: explicitly disabled via build config 00:23:46.705 test-compress-perf: explicitly disabled via build config 00:23:46.705 test-crypto-perf: explicitly disabled via build config 00:23:46.705 test-dma-perf: explicitly disabled via build config 00:23:46.705 test-eventdev: explicitly disabled via build config 00:23:46.705 test-fib: explicitly disabled via build config 00:23:46.705 test-flow-perf: explicitly disabled via build config 00:23:46.705 test-gpudev: explicitly disabled via build config 00:23:46.705 test-mldev: explicitly disabled via build config 00:23:46.705 test-pipeline: explicitly disabled via build config 00:23:46.705 test-pmd: explicitly disabled via build config 00:23:46.705 test-regex: explicitly disabled via build config 00:23:46.705 test-sad: explicitly disabled via build config 00:23:46.705 test-security-perf: explicitly disabled via build config 00:23:46.705 00:23:46.705 libs: 00:23:46.705 argparse: explicitly disabled via build config 00:23:46.705 metrics: explicitly disabled via build config 00:23:46.705 acl: explicitly disabled via build config 00:23:46.705 bbdev: explicitly disabled via build config 00:23:46.705 bitratestats: explicitly disabled via build config 00:23:46.705 bpf: explicitly disabled via build config 00:23:46.705 cfgfile: explicitly disabled via build config 00:23:46.705 distributor: explicitly disabled via build config 00:23:46.705 efd: explicitly disabled via build config 00:23:46.705 eventdev: explicitly disabled via build config 00:23:46.705 dispatcher: explicitly disabled via build config 00:23:46.705 gpudev: explicitly disabled via build config 00:23:46.705 gro: explicitly disabled via build config 00:23:46.705 gso: explicitly disabled via build config 00:23:46.705 ip_frag: explicitly disabled via build config 00:23:46.705 jobstats: explicitly disabled via build config 00:23:46.705 latencystats: explicitly disabled via build config 00:23:46.705 lpm: explicitly disabled via build config 00:23:46.705 member: explicitly disabled via build config 00:23:46.705 pcapng: explicitly disabled via build config 00:23:46.705 rawdev: explicitly disabled via build config 
00:23:46.705 regexdev: explicitly disabled via build config
00:23:46.705 mldev: explicitly disabled via build config
00:23:46.705 rib: explicitly disabled via build config
00:23:46.705 sched: explicitly disabled via build config
00:23:46.705 stack: explicitly disabled via build config
00:23:46.705 ipsec: explicitly disabled via build config
00:23:46.705 pdcp: explicitly disabled via build config
00:23:46.705 fib: explicitly disabled via build config
00:23:46.705 port: explicitly disabled via build config
00:23:46.705 pdump: explicitly disabled via build config
00:23:46.705 table: explicitly disabled via build config
00:23:46.705 pipeline: explicitly disabled via build config
00:23:46.705 graph: explicitly disabled via build config
00:23:46.705 node: explicitly disabled via build config
00:23:46.705
00:23:46.705 drivers:
00:23:46.705 common/cpt: not in enabled drivers build config
00:23:46.705 common/dpaax: not in enabled drivers build config
00:23:46.705 common/iavf: not in enabled drivers build config
00:23:46.705 common/idpf: not in enabled drivers build config
00:23:46.705 common/ionic: not in enabled drivers build config
00:23:46.705 common/mvep: not in enabled drivers build config
00:23:46.705 common/octeontx: not in enabled drivers build config
00:23:46.705 bus/auxiliary: not in enabled drivers build config
00:23:46.705 bus/cdx: not in enabled drivers build config
00:23:46.705 bus/dpaa: not in enabled drivers build config
00:23:46.705 bus/fslmc: not in enabled drivers build config
00:23:46.705 bus/ifpga: not in enabled drivers build config
00:23:46.705 bus/platform: not in enabled drivers build config
00:23:46.705 bus/uacce: not in enabled drivers build config
00:23:46.705 bus/vmbus: not in enabled drivers build config
00:23:46.705 common/cnxk: not in enabled drivers build config
00:23:46.705 common/mlx5: not in enabled drivers build config
00:23:46.705 common/nfp: not in enabled drivers build config
00:23:46.705 common/nitrox: not in enabled drivers build config
00:23:46.705 common/qat: not in enabled drivers build config
00:23:46.705 common/sfc_efx: not in enabled drivers build config
00:23:46.705 mempool/bucket: not in enabled drivers build config
00:23:46.705 mempool/cnxk: not in enabled drivers build config
00:23:46.705 mempool/dpaa: not in enabled drivers build config
00:23:46.705 mempool/dpaa2: not in enabled drivers build config
00:23:46.705 mempool/octeontx: not in enabled drivers build config
00:23:46.705 mempool/stack: not in enabled drivers build config
00:23:46.705 dma/cnxk: not in enabled drivers build config
00:23:46.705 dma/dpaa: not in enabled drivers build config
00:23:46.705 dma/dpaa2: not in enabled drivers build config
00:23:46.705 dma/hisilicon: not in enabled drivers build config
00:23:46.705 dma/idxd: not in enabled drivers build config
00:23:46.705 dma/ioat: not in enabled drivers build config
00:23:46.705 dma/skeleton: not in enabled drivers build config
00:23:46.705 net/af_packet: not in enabled drivers build config
00:23:46.705 net/af_xdp: not in enabled drivers build config
00:23:46.705 net/ark: not in enabled drivers build config
00:23:46.705 net/atlantic: not in enabled drivers build config
00:23:46.705 net/avp: not in enabled drivers build config
00:23:46.705 net/axgbe: not in enabled drivers build config
00:23:46.705 net/bnx2x: not in enabled drivers build config
00:23:46.705 net/bnxt: not in enabled drivers build config
00:23:46.705 net/bonding: not in enabled drivers build config
00:23:46.705 net/cnxk: not in enabled drivers build config
00:23:46.705 net/cpfl: not in enabled drivers build config
00:23:46.705 net/cxgbe: not in enabled drivers build config
00:23:46.705 net/dpaa: not in enabled drivers build config
00:23:46.705 net/dpaa2: not in enabled drivers build config
00:23:46.705 net/e1000: not in enabled drivers build config
00:23:46.705 net/ena: not in enabled drivers build config
00:23:46.705 net/enetc: not in enabled drivers build config
00:23:46.705 net/enetfec: not in enabled drivers build config
00:23:46.705 net/enic: not in enabled drivers build config
00:23:46.705 net/failsafe: not in enabled drivers build config
00:23:46.705 net/fm10k: not in enabled drivers build config
00:23:46.705 net/gve: not in enabled drivers build config
00:23:46.705 net/hinic: not in enabled drivers build config
00:23:46.705 net/hns3: not in enabled drivers build config
00:23:46.705 net/i40e: not in enabled drivers build config
00:23:46.705 net/iavf: not in enabled drivers build config
00:23:46.705 net/ice: not in enabled drivers build config
00:23:46.705 net/idpf: not in enabled drivers build config
00:23:46.705 net/igc: not in enabled drivers build config
00:23:46.705 net/ionic: not in enabled drivers build config
00:23:46.705 net/ipn3ke: not in enabled drivers build config
00:23:46.705 net/ixgbe: not in enabled drivers build config
00:23:46.705 net/mana: not in enabled drivers build config
00:23:46.705 net/memif: not in enabled drivers build config
00:23:46.705 net/mlx4: not in enabled drivers build config
00:23:46.705 net/mlx5: not in enabled drivers build config
00:23:46.705 net/mvneta: not in enabled drivers build config
00:23:46.705 net/mvpp2: not in enabled drivers build config
00:23:46.705 net/netvsc: not in enabled drivers build config
00:23:46.705 net/nfb: not in enabled drivers build config
00:23:46.705 net/nfp: not in enabled drivers build config
00:23:46.705 net/ngbe: not in enabled drivers build config
00:23:46.705 net/null: not in enabled drivers build config
00:23:46.705 net/octeontx: not in enabled drivers build config
00:23:46.705 net/octeon_ep: not in enabled drivers build config
00:23:46.705 net/pcap: not in enabled drivers build config
00:23:46.705 net/pfe: not in enabled drivers build config
00:23:46.705 net/qede: not in enabled drivers build config
00:23:46.705 net/ring: not in enabled drivers build config
00:23:46.705 net/sfc: not in enabled drivers build config
00:23:46.705 net/softnic: not in enabled drivers build config
00:23:46.705 net/tap: not in enabled drivers build config
00:23:46.705 net/thunderx: not in enabled drivers build config
00:23:46.705 net/txgbe: not in enabled drivers build config
00:23:46.705 net/vdev_netvsc: not in enabled drivers build config
00:23:46.705 net/vhost: not in enabled drivers build config
00:23:46.705 net/virtio: not in enabled drivers build config
00:23:46.705 net/vmxnet3: not in enabled drivers build config
00:23:46.705 raw/*: missing internal dependency, "rawdev"
00:23:46.705 crypto/armv8: not in enabled drivers build config
00:23:46.705 crypto/bcmfs: not in enabled drivers build config
00:23:46.705 crypto/caam_jr: not in enabled drivers build config
00:23:46.705 crypto/ccp: not in enabled drivers build config
00:23:46.705 crypto/cnxk: not in enabled drivers build config
00:23:46.705 crypto/dpaa_sec: not in enabled drivers build config
00:23:46.705 crypto/dpaa2_sec: not in enabled drivers build config
00:23:46.705 crypto/ipsec_mb: not in enabled drivers build config
00:23:46.705 crypto/mlx5: not in enabled drivers build config
00:23:46.705 crypto/mvsam: not in enabled drivers build config
00:23:46.705 crypto/nitrox: not in enabled drivers build config
00:23:46.705 crypto/null: not in enabled drivers build config
00:23:46.705 crypto/octeontx: not in enabled drivers build config
00:23:46.705 crypto/openssl: not in enabled drivers build config
00:23:46.705 crypto/scheduler: not in enabled drivers build config
00:23:46.705 crypto/uadk: not in enabled drivers build config
00:23:46.705 crypto/virtio: not in enabled drivers build config
00:23:46.705 compress/isal: not in enabled drivers build config
00:23:46.705 compress/mlx5: not in enabled drivers build config
00:23:46.705 compress/nitrox: not in enabled drivers build config
00:23:46.705 compress/octeontx: not in enabled drivers build config
00:23:46.705 compress/zlib: not in enabled drivers build config
00:23:46.705 regex/*: missing internal dependency, "regexdev"
00:23:46.705 ml/*: missing internal dependency, "mldev"
00:23:46.705 vdpa/ifc: not in enabled drivers build config
00:23:46.705 vdpa/mlx5: not in enabled drivers build config
00:23:46.705 vdpa/nfp: not in enabled drivers build config
00:23:46.705 vdpa/sfc: not in enabled drivers build config
00:23:46.705 event/*: missing internal dependency, "eventdev"
00:23:46.705 baseband/*: missing internal dependency, "bbdev"
00:23:46.705 gpu/*: missing internal dependency, "gpudev"
00:23:46.705
00:23:46.705
00:23:46.705 Build targets in project: 85
00:23:46.705
00:23:46.705 DPDK 24.03.0
00:23:46.705
00:23:46.705 User defined options
00:23:46.705 buildtype : debug
00:23:46.705 default_library : shared
00:23:46.705 libdir : lib
00:23:46.705 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:23:46.705 b_sanitize : address
00:23:46.705 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:23:46.705 c_link_args :
00:23:46.705 cpu_instruction_set: native
00:23:46.705 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:23:46.705 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:23:46.705 enable_docs : false
00:23:46.705 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:23:46.705 enable_kmods : false
00:23:46.705 max_lcores : 128
00:23:46.705 tests : false
00:23:46.705
00:23:46.705 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:23:46.705 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:23:46.705 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:23:46.705 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:23:46.705 [3/268] Linking static target lib/librte_kvargs.a
00:23:46.705 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:23:46.705 [5/268] Linking static target lib/librte_log.a
00:23:46.963 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:23:47.221 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:23:47.221 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:23:47.221 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:23:47.221 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:23:47.221 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:23:47.221 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:23:47.479 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:23:47.479 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:23:47.479 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:23:47.479 [16/268] Linking static target lib/librte_telemetry.a
00:23:47.479 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:23:47.738 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:23:47.738 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:23:47.738 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:23:47.738 [21/268] Linking target lib/librte_log.so.24.1
00:23:47.996 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:23:47.996 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:23:47.996 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:23:47.996 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:23:47.996 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:23:48.255 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:23:48.255 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:23:48.255 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:23:48.255 [30/268] Linking target lib/librte_kvargs.so.24.1
00:23:48.255 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:23:48.255 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:23:48.514 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:23:48.514 [34/268] Linking target lib/librte_telemetry.so.24.1
00:23:48.514 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:23:48.514 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:23:48.514 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:23:48.773 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:23:48.773 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:23:48.773 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:23:48.773 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:23:48.773 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:23:48.773 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:23:48.773 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:23:48.773 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:23:49.032 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:23:49.032 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
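The "User defined options" block earlier in the configure output records how this DPDK tree was set up before ninja took over. The log only prints the resulting summary, never the command that produced it, so the following is a hypothetical reconstruction, assuming each summary entry maps onto meson's standard option of the same name (which is how DPDK's build normally surfaces them); all values and paths are copied from the summary itself:

    # Hypothetical configure step rebuilt from the "User defined options"
    # summary above; run from the DPDK source tree.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false

This accounts for every entry in the summary except c_link_args, which the log shows as empty and can therefore be omitted.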
00:23:49.032 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:23:49.293 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:23:49.293 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:23:49.293 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:23:49.293 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:23:49.552 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:23:49.552 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:23:49.552 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:23:49.552 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:23:49.552 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:23:49.552 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:23:49.829 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:23:49.829 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:23:49.829 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:23:49.829 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:23:49.829 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:23:50.092 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:23:50.092 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:23:50.092 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:23:50.092 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:23:50.350 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:50.350 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:23:50.350 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:23:50.608 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:23:50.608 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:23:50.608 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:23:50.608 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:23:50.608 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:23:50.608 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:23:50.608 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:23:50.608 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:23:50.866 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:23:50.866 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:23:50.866 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:23:50.866 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:23:51.125 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:23:51.125 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:23:51.125 [85/268] Linking static target lib/librte_eal.a 00:23:51.384 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:23:51.384 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:23:51.384 [88/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:23:51.384 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:23:51.384 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:23:51.384 [91/268] Linking static target lib/librte_ring.a 00:23:51.384 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:23:51.384 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:23:51.384 [94/268] Linking static target lib/librte_mempool.a 00:23:51.642 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:23:51.642 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:23:51.902 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:23:51.902 [98/268] Linking static target lib/librte_rcu.a 00:23:51.902 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:23:51.902 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:23:51.902 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:23:51.902 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:23:52.162 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:23:52.162 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:23:52.421 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:23:52.421 [106/268] Linking static target lib/librte_meter.a 00:23:52.421 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:23:52.421 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:23:52.421 [109/268] Linking static target lib/librte_net.a 00:23:52.421 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:23:52.421 [111/268] Linking static target lib/librte_mbuf.a 00:23:52.421 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:23:52.712 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:23:52.712 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:23:52.712 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:23:52.712 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:23:52.712 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:23:52.712 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:23:53.280 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:23:53.280 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:23:53.280 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:23:53.280 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:23:53.541 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:23:53.541 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:23:53.801 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:23:53.801 [126/268] Linking static target lib/librte_pci.a 00:23:53.801 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:23:54.060 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:23:54.060 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:23:54.060 
[130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:23:54.060 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:23:54.060 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:23:54.060 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:23:54.060 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:23:54.060 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:54.060 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:23:54.060 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:23:54.060 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:23:54.060 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:23:54.324 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:23:54.324 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:23:54.324 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:23:54.324 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:23:54.324 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:23:54.584 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:23:54.584 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:23:54.584 [147/268] Linking static target lib/librte_cmdline.a 00:23:54.844 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:23:54.844 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:23:54.844 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:23:54.844 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:23:54.844 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:23:54.844 [153/268] Linking static target lib/librte_timer.a 00:23:55.104 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:23:55.104 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:23:55.363 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:23:55.363 [157/268] Linking static target lib/librte_ethdev.a 00:23:55.363 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:23:55.623 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:23:55.623 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:23:55.623 [161/268] Linking static target lib/librte_compressdev.a 00:23:55.623 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:23:55.623 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:23:55.882 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:23:55.882 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:23:55.882 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:23:55.882 [167/268] Linking static target lib/librte_hash.a 00:23:55.882 [168/268] Linking static target lib/librte_dmadev.a 00:23:55.882 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:23:56.139 [170/268] 
Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:23:56.139 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:23:56.139 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:23:56.437 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:23:56.437 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:23:56.695 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:56.695 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:56.695 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:23:56.695 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:23:56.695 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:23:56.695 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:56.954 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:56.954 [182/268] Linking static target lib/librte_cryptodev.a 00:23:56.954 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:23:56.954 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:23:57.521 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:57.521 [186/268] Linking static target lib/librte_power.a 00:23:57.521 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:57.521 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:57.521 [189/268] Linking static target lib/librte_reorder.a 00:23:57.521 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:57.521 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:57.781 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:57.781 [193/268] Linking static target lib/librte_security.a 00:23:58.040 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:58.299 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:58.559 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:58.559 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:58.559 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:58.559 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:58.559 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:58.819 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:23:59.079 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:59.079 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:59.079 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:59.079 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:59.079 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:59.337 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:59.337 [208/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:59.337 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:59.593 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:59.593 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:23:59.593 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:59.593 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:59.593 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:59.593 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:59.879 [216/268] Linking static target drivers/librte_bus_vdev.a 00:23:59.879 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:59.879 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:59.879 [219/268] Linking static target drivers/librte_bus_pci.a 00:23:59.879 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:59.879 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:24:00.138 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:00.138 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:24:00.138 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:00.138 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:00.138 [226/268] Linking static target drivers/librte_mempool_ring.a 00:24:00.138 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:01.516 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:24:02.461 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:24:02.461 [230/268] Linking target lib/librte_eal.so.24.1 00:24:02.461 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:24:02.734 [232/268] Linking target lib/librte_dmadev.so.24.1 00:24:02.734 [233/268] Linking target lib/librte_meter.so.24.1 00:24:02.734 [234/268] Linking target lib/librte_ring.so.24.1 00:24:02.734 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:24:02.734 [236/268] Linking target lib/librte_pci.so.24.1 00:24:02.734 [237/268] Linking target lib/librte_timer.so.24.1 00:24:02.734 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:24:02.734 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:24:02.734 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:24:02.734 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:24:02.734 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:24:02.734 [243/268] Linking target lib/librte_rcu.so.24.1 00:24:02.734 [244/268] Linking target lib/librte_mempool.so.24.1 00:24:02.734 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:24:02.994 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:24:02.994 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:24:02.994 
[248/268] Linking target lib/librte_mbuf.so.24.1 00:24:02.994 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:24:03.253 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:24:03.253 [251/268] Linking target lib/librte_net.so.24.1 00:24:03.253 [252/268] Linking target lib/librte_reorder.so.24.1 00:24:03.253 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:24:03.253 [254/268] Linking target lib/librte_compressdev.so.24.1 00:24:03.512 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:24:03.512 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:24:03.512 [257/268] Linking target lib/librte_cmdline.so.24.1 00:24:03.512 [258/268] Linking target lib/librte_security.so.24.1 00:24:03.512 [259/268] Linking target lib/librte_hash.so.24.1 00:24:03.512 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:24:04.081 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:04.342 [262/268] Linking target lib/librte_ethdev.so.24.1 00:24:04.342 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:24:04.602 [264/268] Linking target lib/librte_power.so.24.1 00:24:04.863 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:24:05.127 [266/268] Linking static target lib/librte_vhost.a 00:24:07.691 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:24:07.691 [268/268] Linking target lib/librte_vhost.so.24.1 00:24:07.691 INFO: autodetecting backend as ninja 00:24:07.691 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:24:25.802 CC lib/log/log.o 00:24:25.802 CC lib/log/log_flags.o 00:24:25.802 CC lib/log/log_deprecated.o 00:24:25.802 CC lib/ut/ut.o 00:24:25.802 CC lib/ut_mock/mock.o 00:24:25.802 LIB libspdk_log.a 00:24:25.802 LIB libspdk_ut.a 00:24:25.802 LIB libspdk_ut_mock.a 00:24:25.802 SO libspdk_ut.so.2.0 00:24:25.802 SO libspdk_log.so.7.1 00:24:25.802 SO libspdk_ut_mock.so.6.0 00:24:26.061 SYMLINK libspdk_ut.so 00:24:26.061 SYMLINK libspdk_log.so 00:24:26.061 SYMLINK libspdk_ut_mock.so 00:24:26.061 CXX lib/trace_parser/trace.o 00:24:26.061 CC lib/dma/dma.o 00:24:26.321 CC lib/ioat/ioat.o 00:24:26.321 CC lib/util/base64.o 00:24:26.321 CC lib/util/bit_array.o 00:24:26.321 CC lib/util/cpuset.o 00:24:26.321 CC lib/util/crc32.o 00:24:26.321 CC lib/util/crc16.o 00:24:26.321 CC lib/util/crc32c.o 00:24:26.321 CC lib/vfio_user/host/vfio_user_pci.o 00:24:26.321 CC lib/vfio_user/host/vfio_user.o 00:24:26.321 CC lib/util/crc32_ieee.o 00:24:26.321 CC lib/util/crc64.o 00:24:26.321 LIB libspdk_dma.a 00:24:26.321 CC lib/util/dif.o 00:24:26.321 SO libspdk_dma.so.5.0 00:24:26.580 CC lib/util/fd.o 00:24:26.580 CC lib/util/fd_group.o 00:24:26.580 SYMLINK libspdk_dma.so 00:24:26.580 CC lib/util/file.o 00:24:26.580 CC lib/util/hexlify.o 00:24:26.580 CC lib/util/iov.o 00:24:26.580 LIB libspdk_ioat.a 00:24:26.580 SO libspdk_ioat.so.7.0 00:24:26.580 LIB libspdk_vfio_user.a 00:24:26.580 CC lib/util/math.o 00:24:26.580 CC lib/util/net.o 00:24:26.580 SYMLINK libspdk_ioat.so 00:24:26.580 CC lib/util/pipe.o 00:24:26.580 SO libspdk_vfio_user.so.5.0 00:24:26.580 CC lib/util/strerror_tls.o 00:24:26.581 CC lib/util/string.o 00:24:26.581 SYMLINK libspdk_vfio_user.so 00:24:26.581 CC lib/util/uuid.o 00:24:26.581 CC lib/util/xor.o 
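The two INFO lines above show meson autodetecting ninja and printing the exact backend command it is about to drive. That command can also be replayed by hand to rebuild just the DPDK subproject in place; a minimal sketch:

    # Copied verbatim from the "calculating backend command" INFO line above.
    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10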
00:24:26.840 CC lib/util/zipf.o 00:24:26.840 CC lib/util/md5.o 00:24:27.099 LIB libspdk_util.a 00:24:27.358 SO libspdk_util.so.10.1 00:24:27.358 LIB libspdk_trace_parser.a 00:24:27.358 SO libspdk_trace_parser.so.6.0 00:24:27.358 SYMLINK libspdk_util.so 00:24:27.358 SYMLINK libspdk_trace_parser.so 00:24:27.615 CC lib/vmd/vmd.o 00:24:27.615 CC lib/vmd/led.o 00:24:27.615 CC lib/conf/conf.o 00:24:27.615 CC lib/idxd/idxd_user.o 00:24:27.615 CC lib/idxd/idxd.o 00:24:27.615 CC lib/idxd/idxd_kernel.o 00:24:27.615 CC lib/rdma_utils/rdma_utils.o 00:24:27.615 CC lib/env_dpdk/memory.o 00:24:27.615 CC lib/env_dpdk/env.o 00:24:27.615 CC lib/json/json_parse.o 00:24:27.615 CC lib/env_dpdk/pci.o 00:24:27.615 CC lib/env_dpdk/init.o 00:24:27.874 LIB libspdk_conf.a 00:24:27.874 CC lib/json/json_util.o 00:24:27.874 SO libspdk_conf.so.6.0 00:24:27.874 CC lib/json/json_write.o 00:24:27.874 LIB libspdk_rdma_utils.a 00:24:27.874 SYMLINK libspdk_conf.so 00:24:27.874 CC lib/env_dpdk/threads.o 00:24:27.874 SO libspdk_rdma_utils.so.1.0 00:24:27.874 SYMLINK libspdk_rdma_utils.so 00:24:27.874 CC lib/env_dpdk/pci_ioat.o 00:24:28.132 CC lib/env_dpdk/pci_virtio.o 00:24:28.132 CC lib/env_dpdk/pci_vmd.o 00:24:28.132 CC lib/env_dpdk/pci_idxd.o 00:24:28.132 CC lib/env_dpdk/pci_event.o 00:24:28.132 LIB libspdk_json.a 00:24:28.132 CC lib/env_dpdk/sigbus_handler.o 00:24:28.132 CC lib/env_dpdk/pci_dpdk.o 00:24:28.132 CC lib/env_dpdk/pci_dpdk_2207.o 00:24:28.132 SO libspdk_json.so.6.0 00:24:28.132 CC lib/env_dpdk/pci_dpdk_2211.o 00:24:28.132 SYMLINK libspdk_json.so 00:24:28.404 LIB libspdk_idxd.a 00:24:28.404 LIB libspdk_vmd.a 00:24:28.404 SO libspdk_idxd.so.12.1 00:24:28.404 SO libspdk_vmd.so.6.0 00:24:28.404 SYMLINK libspdk_idxd.so 00:24:28.404 SYMLINK libspdk_vmd.so 00:24:28.404 CC lib/rdma_provider/rdma_provider_verbs.o 00:24:28.404 CC lib/rdma_provider/common.o 00:24:28.404 CC lib/jsonrpc/jsonrpc_server.o 00:24:28.404 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:24:28.404 CC lib/jsonrpc/jsonrpc_client.o 00:24:28.404 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:24:28.664 LIB libspdk_rdma_provider.a 00:24:28.664 SO libspdk_rdma_provider.so.7.0 00:24:28.664 LIB libspdk_jsonrpc.a 00:24:28.664 SYMLINK libspdk_rdma_provider.so 00:24:28.923 SO libspdk_jsonrpc.so.6.0 00:24:28.923 SYMLINK libspdk_jsonrpc.so 00:24:29.182 CC lib/rpc/rpc.o 00:24:29.182 LIB libspdk_env_dpdk.a 00:24:29.442 SO libspdk_env_dpdk.so.15.1 00:24:29.442 LIB libspdk_rpc.a 00:24:29.442 SYMLINK libspdk_env_dpdk.so 00:24:29.442 SO libspdk_rpc.so.6.0 00:24:29.700 SYMLINK libspdk_rpc.so 00:24:29.959 CC lib/notify/notify.o 00:24:29.959 CC lib/notify/notify_rpc.o 00:24:29.959 CC lib/trace/trace.o 00:24:29.959 CC lib/trace/trace_flags.o 00:24:29.959 CC lib/trace/trace_rpc.o 00:24:29.959 CC lib/keyring/keyring.o 00:24:29.959 CC lib/keyring/keyring_rpc.o 00:24:30.218 LIB libspdk_notify.a 00:24:30.218 SO libspdk_notify.so.6.0 00:24:30.218 SYMLINK libspdk_notify.so 00:24:30.218 LIB libspdk_keyring.a 00:24:30.218 LIB libspdk_trace.a 00:24:30.218 SO libspdk_keyring.so.2.0 00:24:30.218 SO libspdk_trace.so.11.0 00:24:30.477 SYMLINK libspdk_keyring.so 00:24:30.477 SYMLINK libspdk_trace.so 00:24:30.736 CC lib/sock/sock.o 00:24:30.736 CC lib/sock/sock_rpc.o 00:24:30.736 CC lib/thread/thread.o 00:24:30.736 CC lib/thread/iobuf.o 00:24:31.305 LIB libspdk_sock.a 00:24:31.305 SO libspdk_sock.so.10.0 00:24:31.305 SYMLINK libspdk_sock.so 00:24:31.891 CC lib/nvme/nvme_ctrlr_cmd.o 00:24:31.891 CC lib/nvme/nvme_ctrlr.o 00:24:31.891 CC lib/nvme/nvme_ns.o 00:24:31.891 CC lib/nvme/nvme_fabric.o 00:24:31.891 
CC lib/nvme/nvme_pcie_common.o 00:24:31.891 CC lib/nvme/nvme_ns_cmd.o 00:24:31.891 CC lib/nvme/nvme_qpair.o 00:24:31.891 CC lib/nvme/nvme_pcie.o 00:24:31.891 CC lib/nvme/nvme.o 00:24:32.463 CC lib/nvme/nvme_quirks.o 00:24:32.463 LIB libspdk_thread.a 00:24:32.463 CC lib/nvme/nvme_transport.o 00:24:32.463 SO libspdk_thread.so.11.0 00:24:32.463 CC lib/nvme/nvme_discovery.o 00:24:32.463 SYMLINK libspdk_thread.so 00:24:32.463 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:24:32.722 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:24:32.722 CC lib/nvme/nvme_tcp.o 00:24:32.722 CC lib/accel/accel.o 00:24:32.722 CC lib/nvme/nvme_opal.o 00:24:32.980 CC lib/nvme/nvme_io_msg.o 00:24:32.980 CC lib/nvme/nvme_poll_group.o 00:24:33.238 CC lib/nvme/nvme_zns.o 00:24:33.238 CC lib/nvme/nvme_stubs.o 00:24:33.238 CC lib/nvme/nvme_auth.o 00:24:33.238 CC lib/nvme/nvme_cuse.o 00:24:33.238 CC lib/nvme/nvme_rdma.o 00:24:33.804 CC lib/accel/accel_rpc.o 00:24:33.804 CC lib/accel/accel_sw.o 00:24:33.804 CC lib/blob/blobstore.o 00:24:34.064 CC lib/init/json_config.o 00:24:34.064 CC lib/virtio/virtio.o 00:24:34.064 LIB libspdk_accel.a 00:24:34.064 SO libspdk_accel.so.16.0 00:24:34.064 CC lib/fsdev/fsdev.o 00:24:34.324 CC lib/fsdev/fsdev_io.o 00:24:34.324 SYMLINK libspdk_accel.so 00:24:34.324 CC lib/init/subsystem.o 00:24:34.324 CC lib/init/subsystem_rpc.o 00:24:34.324 CC lib/init/rpc.o 00:24:34.325 CC lib/fsdev/fsdev_rpc.o 00:24:34.325 CC lib/virtio/virtio_vhost_user.o 00:24:34.325 CC lib/virtio/virtio_vfio_user.o 00:24:34.325 CC lib/virtio/virtio_pci.o 00:24:34.325 CC lib/blob/request.o 00:24:34.584 LIB libspdk_init.a 00:24:34.584 SO libspdk_init.so.6.0 00:24:34.584 CC lib/blob/zeroes.o 00:24:34.584 SYMLINK libspdk_init.so 00:24:34.584 CC lib/blob/blob_bs_dev.o 00:24:34.584 CC lib/bdev/bdev.o 00:24:34.843 CC lib/bdev/bdev_rpc.o 00:24:34.843 LIB libspdk_virtio.a 00:24:34.843 CC lib/bdev/bdev_zone.o 00:24:34.843 CC lib/bdev/part.o 00:24:34.843 SO libspdk_virtio.so.7.0 00:24:34.843 LIB libspdk_nvme.a 00:24:34.843 CC lib/event/app.o 00:24:34.843 SYMLINK libspdk_virtio.so 00:24:34.843 CC lib/event/reactor.o 00:24:34.843 CC lib/event/log_rpc.o 00:24:34.843 LIB libspdk_fsdev.a 00:24:35.102 SO libspdk_nvme.so.15.0 00:24:35.102 CC lib/bdev/scsi_nvme.o 00:24:35.102 SO libspdk_fsdev.so.2.0 00:24:35.102 CC lib/event/app_rpc.o 00:24:35.102 CC lib/event/scheduler_static.o 00:24:35.102 SYMLINK libspdk_fsdev.so 00:24:35.361 SYMLINK libspdk_nvme.so 00:24:35.361 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:24:35.361 LIB libspdk_event.a 00:24:35.361 SO libspdk_event.so.14.0 00:24:35.620 SYMLINK libspdk_event.so 00:24:35.879 LIB libspdk_fuse_dispatcher.a 00:24:36.176 SO libspdk_fuse_dispatcher.so.1.0 00:24:36.176 SYMLINK libspdk_fuse_dispatcher.so 00:24:37.565 LIB libspdk_blob.a 00:24:37.565 SO libspdk_blob.so.12.0 00:24:37.831 LIB libspdk_bdev.a 00:24:37.831 SYMLINK libspdk_blob.so 00:24:37.831 SO libspdk_bdev.so.17.0 00:24:38.089 SYMLINK libspdk_bdev.so 00:24:38.089 CC lib/lvol/lvol.o 00:24:38.089 CC lib/blobfs/tree.o 00:24:38.089 CC lib/blobfs/blobfs.o 00:24:38.089 CC lib/scsi/dev.o 00:24:38.089 CC lib/scsi/lun.o 00:24:38.089 CC lib/scsi/port.o 00:24:38.089 CC lib/ftl/ftl_core.o 00:24:38.089 CC lib/nbd/nbd.o 00:24:38.089 CC lib/nvmf/ctrlr.o 00:24:38.089 CC lib/ublk/ublk.o 00:24:38.348 CC lib/nvmf/ctrlr_discovery.o 00:24:38.348 CC lib/nvmf/ctrlr_bdev.o 00:24:38.348 CC lib/scsi/scsi.o 00:24:38.607 CC lib/scsi/scsi_bdev.o 00:24:38.607 CC lib/nvmf/subsystem.o 00:24:38.607 CC lib/ftl/ftl_init.o 00:24:38.607 CC lib/nbd/nbd_rpc.o 00:24:38.865 LIB libspdk_nbd.a 
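From "CC lib/log/log.o" onward the log is SPDK's own make output, and each library cycles through the same four tags: CC compiles one object, LIB archives the static library, SO links the versioned shared object (for example libspdk_log.so.7.1 above), and SYMLINK points the unversioned name at it. A quick on-disk check of that pattern, as a sketch; the build/lib path is an assumption based on SPDK's default output layout, not something the log states:

    # Hypothetical post-build check from the repo root; adjust the path if the
    # build was configured with a different output directory.
    ls -l build/lib/libspdk_log.*
    # should show libspdk_log.a, libspdk_log.so.7.1, and the libspdk_log.so symlink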
00:24:38.865 CC lib/ftl/ftl_layout.o 00:24:38.865 CC lib/nvmf/nvmf.o 00:24:38.865 SO libspdk_nbd.so.7.0 00:24:38.865 SYMLINK libspdk_nbd.so 00:24:38.865 CC lib/nvmf/nvmf_rpc.o 00:24:38.865 CC lib/ublk/ublk_rpc.o 00:24:39.123 LIB libspdk_blobfs.a 00:24:39.123 CC lib/scsi/scsi_pr.o 00:24:39.123 SO libspdk_blobfs.so.11.0 00:24:39.123 LIB libspdk_ublk.a 00:24:39.123 SO libspdk_ublk.so.3.0 00:24:39.123 SYMLINK libspdk_blobfs.so 00:24:39.123 CC lib/scsi/scsi_rpc.o 00:24:39.123 CC lib/ftl/ftl_debug.o 00:24:39.123 SYMLINK libspdk_ublk.so 00:24:39.123 LIB libspdk_lvol.a 00:24:39.123 CC lib/scsi/task.o 00:24:39.123 CC lib/nvmf/transport.o 00:24:39.381 SO libspdk_lvol.so.11.0 00:24:39.381 CC lib/nvmf/tcp.o 00:24:39.381 SYMLINK libspdk_lvol.so 00:24:39.381 CC lib/nvmf/stubs.o 00:24:39.381 CC lib/ftl/ftl_io.o 00:24:39.381 CC lib/nvmf/mdns_server.o 00:24:39.381 LIB libspdk_scsi.a 00:24:39.640 SO libspdk_scsi.so.9.0 00:24:39.640 SYMLINK libspdk_scsi.so 00:24:39.640 CC lib/nvmf/rdma.o 00:24:39.640 CC lib/ftl/ftl_sb.o 00:24:39.908 CC lib/nvmf/auth.o 00:24:39.908 CC lib/ftl/ftl_l2p.o 00:24:39.908 CC lib/ftl/ftl_l2p_flat.o 00:24:40.168 CC lib/iscsi/conn.o 00:24:40.168 CC lib/vhost/vhost.o 00:24:40.168 CC lib/vhost/vhost_rpc.o 00:24:40.168 CC lib/iscsi/init_grp.o 00:24:40.168 CC lib/iscsi/iscsi.o 00:24:40.429 CC lib/ftl/ftl_nv_cache.o 00:24:40.429 CC lib/iscsi/param.o 00:24:40.429 CC lib/iscsi/portal_grp.o 00:24:40.688 CC lib/iscsi/tgt_node.o 00:24:40.947 CC lib/iscsi/iscsi_subsystem.o 00:24:40.947 CC lib/iscsi/iscsi_rpc.o 00:24:40.947 CC lib/iscsi/task.o 00:24:40.947 CC lib/ftl/ftl_band.o 00:24:41.206 CC lib/ftl/ftl_band_ops.o 00:24:41.206 CC lib/vhost/vhost_scsi.o 00:24:41.206 CC lib/ftl/ftl_writer.o 00:24:41.464 CC lib/ftl/ftl_rq.o 00:24:41.464 CC lib/ftl/ftl_reloc.o 00:24:41.464 CC lib/vhost/vhost_blk.o 00:24:41.464 CC lib/ftl/ftl_l2p_cache.o 00:24:41.464 CC lib/ftl/ftl_p2l.o 00:24:41.464 CC lib/ftl/ftl_p2l_log.o 00:24:41.723 CC lib/ftl/mngt/ftl_mngt.o 00:24:41.723 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:41.723 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:41.980 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:41.980 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:41.980 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:41.980 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:41.980 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:41.981 LIB libspdk_iscsi.a 00:24:42.238 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:42.238 CC lib/vhost/rte_vhost_user.o 00:24:42.238 SO libspdk_iscsi.so.8.0 00:24:42.238 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:42.238 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:42.238 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:42.238 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:42.238 CC lib/ftl/utils/ftl_conf.o 00:24:42.238 SYMLINK libspdk_iscsi.so 00:24:42.238 CC lib/ftl/utils/ftl_md.o 00:24:42.497 CC lib/ftl/utils/ftl_mempool.o 00:24:42.497 LIB libspdk_nvmf.a 00:24:42.497 CC lib/ftl/utils/ftl_bitmap.o 00:24:42.497 CC lib/ftl/utils/ftl_property.o 00:24:42.497 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:42.497 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:42.497 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:42.497 SO libspdk_nvmf.so.20.0 00:24:42.497 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:42.755 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:42.755 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:42.755 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:24:42.755 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:42.755 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:42.755 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:42.755 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:42.755 SYMLINK libspdk_nvmf.so 00:24:42.755 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:24:43.013 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:24:43.013 CC lib/ftl/base/ftl_base_dev.o 00:24:43.013 CC lib/ftl/base/ftl_base_bdev.o 00:24:43.013 CC lib/ftl/ftl_trace.o 00:24:43.274 LIB libspdk_ftl.a 00:24:43.274 LIB libspdk_vhost.a 00:24:43.534 SO libspdk_vhost.so.8.0 00:24:43.534 SO libspdk_ftl.so.9.0 00:24:43.534 SYMLINK libspdk_vhost.so 00:24:43.794 SYMLINK libspdk_ftl.so 00:24:44.053 CC module/env_dpdk/env_dpdk_rpc.o 00:24:44.314 CC module/blob/bdev/blob_bdev.o 00:24:44.314 CC module/sock/posix/posix.o 00:24:44.314 CC module/accel/dsa/accel_dsa.o 00:24:44.314 CC module/accel/iaa/accel_iaa.o 00:24:44.314 CC module/keyring/file/keyring.o 00:24:44.314 CC module/accel/error/accel_error.o 00:24:44.314 CC module/accel/ioat/accel_ioat.o 00:24:44.314 CC module/fsdev/aio/fsdev_aio.o 00:24:44.314 CC module/scheduler/dynamic/scheduler_dynamic.o 00:24:44.314 LIB libspdk_env_dpdk_rpc.a 00:24:44.314 SO libspdk_env_dpdk_rpc.so.6.0 00:24:44.314 CC module/keyring/file/keyring_rpc.o 00:24:44.314 SYMLINK libspdk_env_dpdk_rpc.so 00:24:44.314 CC module/fsdev/aio/fsdev_aio_rpc.o 00:24:44.314 CC module/accel/ioat/accel_ioat_rpc.o 00:24:44.314 CC module/accel/iaa/accel_iaa_rpc.o 00:24:44.573 CC module/accel/error/accel_error_rpc.o 00:24:44.573 LIB libspdk_scheduler_dynamic.a 00:24:44.573 SO libspdk_scheduler_dynamic.so.4.0 00:24:44.573 LIB libspdk_keyring_file.a 00:24:44.573 CC module/accel/dsa/accel_dsa_rpc.o 00:24:44.573 SO libspdk_keyring_file.so.2.0 00:24:44.573 SYMLINK libspdk_scheduler_dynamic.so 00:24:44.573 LIB libspdk_accel_ioat.a 00:24:44.573 LIB libspdk_accel_iaa.a 00:24:44.573 LIB libspdk_blob_bdev.a 00:24:44.573 LIB libspdk_accel_error.a 00:24:44.573 SYMLINK libspdk_keyring_file.so 00:24:44.573 SO libspdk_accel_iaa.so.3.0 00:24:44.573 SO libspdk_accel_ioat.so.6.0 00:24:44.573 SO libspdk_blob_bdev.so.12.0 00:24:44.573 SO libspdk_accel_error.so.2.0 00:24:44.573 LIB libspdk_accel_dsa.a 00:24:44.833 SYMLINK libspdk_accel_ioat.so 00:24:44.833 SYMLINK libspdk_accel_iaa.so 00:24:44.833 SYMLINK libspdk_blob_bdev.so 00:24:44.833 CC module/fsdev/aio/linux_aio_mgr.o 00:24:44.833 SYMLINK libspdk_accel_error.so 00:24:44.833 SO libspdk_accel_dsa.so.5.0 00:24:44.833 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:24:44.833 CC module/scheduler/gscheduler/gscheduler.o 00:24:44.833 SYMLINK libspdk_accel_dsa.so 00:24:44.833 CC module/keyring/linux/keyring.o 00:24:44.833 CC module/keyring/linux/keyring_rpc.o 00:24:44.833 LIB libspdk_scheduler_dpdk_governor.a 00:24:44.833 LIB libspdk_scheduler_gscheduler.a 00:24:45.091 LIB libspdk_keyring_linux.a 00:24:45.091 CC module/bdev/error/vbdev_error.o 00:24:45.091 SO libspdk_scheduler_gscheduler.so.4.0 00:24:45.091 SO libspdk_scheduler_dpdk_governor.so.4.0 00:24:45.091 CC module/bdev/delay/vbdev_delay.o 00:24:45.091 CC module/blobfs/bdev/blobfs_bdev.o 00:24:45.091 SO libspdk_keyring_linux.so.1.0 00:24:45.091 SYMLINK libspdk_scheduler_gscheduler.so 00:24:45.091 SYMLINK libspdk_scheduler_dpdk_governor.so 00:24:45.091 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:24:45.091 CC module/bdev/error/vbdev_error_rpc.o 00:24:45.091 LIB libspdk_fsdev_aio.a 00:24:45.091 SYMLINK libspdk_keyring_linux.so 00:24:45.091 CC module/bdev/delay/vbdev_delay_rpc.o 00:24:45.091 CC module/bdev/gpt/gpt.o 00:24:45.091 SO libspdk_fsdev_aio.so.1.0 00:24:45.091 CC module/bdev/lvol/vbdev_lvol.o 00:24:45.091 SYMLINK libspdk_fsdev_aio.so 00:24:45.091 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:24:45.349 CC module/bdev/gpt/vbdev_gpt.o 00:24:45.349 LIB 
libspdk_blobfs_bdev.a 00:24:45.349 LIB libspdk_sock_posix.a 00:24:45.349 SO libspdk_blobfs_bdev.so.6.0 00:24:45.349 LIB libspdk_bdev_error.a 00:24:45.349 SO libspdk_sock_posix.so.6.0 00:24:45.349 SYMLINK libspdk_blobfs_bdev.so 00:24:45.349 SO libspdk_bdev_error.so.6.0 00:24:45.349 CC module/bdev/malloc/bdev_malloc.o 00:24:45.349 SYMLINK libspdk_sock_posix.so 00:24:45.349 SYMLINK libspdk_bdev_error.so 00:24:45.349 CC module/bdev/malloc/bdev_malloc_rpc.o 00:24:45.349 LIB libspdk_bdev_delay.a 00:24:45.349 CC module/bdev/null/bdev_null.o 00:24:45.349 SO libspdk_bdev_delay.so.6.0 00:24:45.606 CC module/bdev/nvme/bdev_nvme.o 00:24:45.606 CC module/bdev/passthru/vbdev_passthru.o 00:24:45.606 SYMLINK libspdk_bdev_delay.so 00:24:45.606 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:24:45.606 LIB libspdk_bdev_gpt.a 00:24:45.606 CC module/bdev/raid/bdev_raid.o 00:24:45.606 SO libspdk_bdev_gpt.so.6.0 00:24:45.606 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:45.606 CC module/bdev/nvme/nvme_rpc.o 00:24:45.606 SYMLINK libspdk_bdev_gpt.so 00:24:45.606 CC module/bdev/nvme/bdev_mdns_client.o 00:24:45.606 LIB libspdk_bdev_lvol.a 00:24:45.606 CC module/bdev/raid/bdev_raid_rpc.o 00:24:45.865 SO libspdk_bdev_lvol.so.6.0 00:24:45.865 CC module/bdev/null/bdev_null_rpc.o 00:24:45.865 CC module/bdev/nvme/vbdev_opal.o 00:24:45.865 SYMLINK libspdk_bdev_lvol.so 00:24:45.865 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:45.865 LIB libspdk_bdev_malloc.a 00:24:45.865 LIB libspdk_bdev_passthru.a 00:24:45.865 SO libspdk_bdev_malloc.so.6.0 00:24:45.865 SO libspdk_bdev_passthru.so.6.0 00:24:45.865 LIB libspdk_bdev_null.a 00:24:46.123 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:46.123 SO libspdk_bdev_null.so.6.0 00:24:46.123 SYMLINK libspdk_bdev_passthru.so 00:24:46.123 SYMLINK libspdk_bdev_malloc.so 00:24:46.123 CC module/bdev/raid/bdev_raid_sb.o 00:24:46.123 CC module/bdev/split/vbdev_split.o 00:24:46.123 SYMLINK libspdk_bdev_null.so 00:24:46.123 CC module/bdev/split/vbdev_split_rpc.o 00:24:46.123 CC module/bdev/zone_block/vbdev_zone_block.o 00:24:46.123 CC module/bdev/xnvme/bdev_xnvme.o 00:24:46.123 CC module/bdev/aio/bdev_aio.o 00:24:46.381 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:24:46.381 CC module/bdev/raid/raid0.o 00:24:46.381 LIB libspdk_bdev_split.a 00:24:46.381 CC module/bdev/ftl/bdev_ftl.o 00:24:46.381 SO libspdk_bdev_split.so.6.0 00:24:46.381 SYMLINK libspdk_bdev_split.so 00:24:46.381 CC module/bdev/raid/raid1.o 00:24:46.381 CC module/bdev/raid/concat.o 00:24:46.381 CC module/bdev/iscsi/bdev_iscsi.o 00:24:46.640 LIB libspdk_bdev_xnvme.a 00:24:46.640 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:46.640 SO libspdk_bdev_xnvme.so.3.0 00:24:46.640 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:46.640 SYMLINK libspdk_bdev_xnvme.so 00:24:46.640 CC module/bdev/aio/bdev_aio_rpc.o 00:24:46.640 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:24:46.640 LIB libspdk_bdev_zone_block.a 00:24:46.640 SO libspdk_bdev_zone_block.so.6.0 00:24:46.640 LIB libspdk_bdev_raid.a 00:24:46.899 SYMLINK libspdk_bdev_zone_block.so 00:24:46.899 LIB libspdk_bdev_ftl.a 00:24:46.899 SO libspdk_bdev_raid.so.6.0 00:24:46.899 LIB libspdk_bdev_aio.a 00:24:46.899 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:46.899 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:46.899 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:46.899 SO libspdk_bdev_ftl.so.6.0 00:24:46.899 SO libspdk_bdev_aio.so.6.0 00:24:46.899 LIB libspdk_bdev_iscsi.a 00:24:46.899 SYMLINK libspdk_bdev_raid.so 00:24:46.899 SYMLINK libspdk_bdev_ftl.so 00:24:46.899 SO libspdk_bdev_iscsi.so.6.0 00:24:46.899 
SYMLINK libspdk_bdev_aio.so 00:24:46.899 SYMLINK libspdk_bdev_iscsi.so 00:24:47.467 LIB libspdk_bdev_virtio.a 00:24:47.467 SO libspdk_bdev_virtio.so.6.0 00:24:47.726 SYMLINK libspdk_bdev_virtio.so 00:24:48.294 LIB libspdk_bdev_nvme.a 00:24:48.554 SO libspdk_bdev_nvme.so.7.1 00:24:48.554 SYMLINK libspdk_bdev_nvme.so 00:24:49.123 CC module/event/subsystems/iobuf/iobuf.o 00:24:49.123 CC module/event/subsystems/vmd/vmd.o 00:24:49.123 CC module/event/subsystems/scheduler/scheduler.o 00:24:49.123 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:49.123 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:49.123 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:49.123 CC module/event/subsystems/fsdev/fsdev.o 00:24:49.123 CC module/event/subsystems/keyring/keyring.o 00:24:49.123 CC module/event/subsystems/sock/sock.o 00:24:49.383 LIB libspdk_event_keyring.a 00:24:49.383 LIB libspdk_event_fsdev.a 00:24:49.383 LIB libspdk_event_scheduler.a 00:24:49.383 LIB libspdk_event_iobuf.a 00:24:49.383 LIB libspdk_event_vhost_blk.a 00:24:49.383 LIB libspdk_event_vmd.a 00:24:49.383 SO libspdk_event_fsdev.so.1.0 00:24:49.383 SO libspdk_event_scheduler.so.4.0 00:24:49.383 SO libspdk_event_keyring.so.1.0 00:24:49.383 SO libspdk_event_iobuf.so.3.0 00:24:49.383 LIB libspdk_event_sock.a 00:24:49.383 SO libspdk_event_vhost_blk.so.3.0 00:24:49.383 SO libspdk_event_vmd.so.6.0 00:24:49.383 SO libspdk_event_sock.so.5.0 00:24:49.383 SYMLINK libspdk_event_fsdev.so 00:24:49.383 SYMLINK libspdk_event_scheduler.so 00:24:49.383 SYMLINK libspdk_event_keyring.so 00:24:49.383 SYMLINK libspdk_event_iobuf.so 00:24:49.383 SYMLINK libspdk_event_vhost_blk.so 00:24:49.383 SYMLINK libspdk_event_vmd.so 00:24:49.383 SYMLINK libspdk_event_sock.so 00:24:49.642 CC module/event/subsystems/accel/accel.o 00:24:49.901 LIB libspdk_event_accel.a 00:24:49.901 SO libspdk_event_accel.so.6.0 00:24:49.901 SYMLINK libspdk_event_accel.so 00:24:50.469 CC module/event/subsystems/bdev/bdev.o 00:24:50.469 LIB libspdk_event_bdev.a 00:24:50.469 SO libspdk_event_bdev.so.6.0 00:24:50.729 SYMLINK libspdk_event_bdev.so 00:24:50.987 CC module/event/subsystems/scsi/scsi.o 00:24:50.987 CC module/event/subsystems/nbd/nbd.o 00:24:50.987 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:50.987 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:50.987 CC module/event/subsystems/ublk/ublk.o 00:24:50.987 LIB libspdk_event_scsi.a 00:24:51.246 LIB libspdk_event_nbd.a 00:24:51.246 SO libspdk_event_scsi.so.6.0 00:24:51.246 LIB libspdk_event_ublk.a 00:24:51.246 SO libspdk_event_nbd.so.6.0 00:24:51.246 SO libspdk_event_ublk.so.3.0 00:24:51.246 SYMLINK libspdk_event_scsi.so 00:24:51.246 SYMLINK libspdk_event_ublk.so 00:24:51.246 SYMLINK libspdk_event_nbd.so 00:24:51.246 LIB libspdk_event_nvmf.a 00:24:51.246 SO libspdk_event_nvmf.so.6.0 00:24:51.246 SYMLINK libspdk_event_nvmf.so 00:24:51.557 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:24:51.557 CC module/event/subsystems/iscsi/iscsi.o 00:24:51.557 LIB libspdk_event_vhost_scsi.a 00:24:51.853 SO libspdk_event_vhost_scsi.so.3.0 00:24:51.853 LIB libspdk_event_iscsi.a 00:24:51.853 SYMLINK libspdk_event_vhost_scsi.so 00:24:51.853 SO libspdk_event_iscsi.so.6.0 00:24:51.853 SYMLINK libspdk_event_iscsi.so 00:24:52.112 SO libspdk.so.6.0 00:24:52.112 SYMLINK libspdk.so 00:24:52.372 CC app/trace_record/trace_record.o 00:24:52.372 CC app/spdk_lspci/spdk_lspci.o 00:24:52.372 CXX app/trace/trace.o 00:24:52.372 CC examples/interrupt_tgt/interrupt_tgt.o 00:24:52.372 CC app/iscsi_tgt/iscsi_tgt.o 00:24:52.372 CC examples/util/zipf/zipf.o 
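With the libraries in place, the build moves on to the SPDK applications and examples (the CC app/... and CC examples/... entries above), and each later LINK line drops a standalone binary. A sketch of smoke-testing one of them once the build finishes, assuming SPDK's default build/bin output directory and a host already prepared for SPDK (root privileges, hugepages configured); neither assumption comes from the log:

    # Hypothetical smoke test: list NVMe devices as SPDK sees them.
    sudo ./build/bin/spdk_lspci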
00:24:52.372 CC examples/ioat/perf/perf.o 00:24:52.372 CC app/nvmf_tgt/nvmf_main.o 00:24:52.372 CC test/thread/poller_perf/poller_perf.o 00:24:52.372 CC app/spdk_tgt/spdk_tgt.o 00:24:52.372 LINK spdk_lspci 00:24:52.629 LINK poller_perf 00:24:52.629 LINK iscsi_tgt 00:24:52.629 LINK nvmf_tgt 00:24:52.629 LINK interrupt_tgt 00:24:52.629 LINK spdk_trace_record 00:24:52.629 LINK zipf 00:24:52.629 LINK ioat_perf 00:24:52.629 LINK spdk_tgt 00:24:52.888 CC app/spdk_nvme_perf/perf.o 00:24:52.888 CC examples/ioat/verify/verify.o 00:24:52.888 CC app/spdk_nvme_identify/identify.o 00:24:52.888 TEST_HEADER include/spdk/accel.h 00:24:52.888 TEST_HEADER include/spdk/accel_module.h 00:24:52.888 TEST_HEADER include/spdk/assert.h 00:24:52.888 CC app/spdk_nvme_discover/discovery_aer.o 00:24:52.888 TEST_HEADER include/spdk/barrier.h 00:24:52.888 TEST_HEADER include/spdk/base64.h 00:24:52.888 TEST_HEADER include/spdk/bdev.h 00:24:52.888 TEST_HEADER include/spdk/bdev_module.h 00:24:52.888 TEST_HEADER include/spdk/bdev_zone.h 00:24:52.888 TEST_HEADER include/spdk/bit_array.h 00:24:52.888 TEST_HEADER include/spdk/bit_pool.h 00:24:52.888 TEST_HEADER include/spdk/blob_bdev.h 00:24:52.888 TEST_HEADER include/spdk/blobfs_bdev.h 00:24:52.888 TEST_HEADER include/spdk/blobfs.h 00:24:52.888 TEST_HEADER include/spdk/blob.h 00:24:52.888 TEST_HEADER include/spdk/conf.h 00:24:52.888 TEST_HEADER include/spdk/config.h 00:24:52.888 TEST_HEADER include/spdk/cpuset.h 00:24:52.888 TEST_HEADER include/spdk/crc16.h 00:24:52.888 TEST_HEADER include/spdk/crc32.h 00:24:52.888 TEST_HEADER include/spdk/crc64.h 00:24:52.888 TEST_HEADER include/spdk/dif.h 00:24:52.888 TEST_HEADER include/spdk/dma.h 00:24:52.888 TEST_HEADER include/spdk/endian.h 00:24:52.888 TEST_HEADER include/spdk/env_dpdk.h 00:24:52.888 TEST_HEADER include/spdk/env.h 00:24:52.888 LINK spdk_trace 00:24:52.888 CC test/dma/test_dma/test_dma.o 00:24:52.888 TEST_HEADER include/spdk/event.h 00:24:52.888 TEST_HEADER include/spdk/fd_group.h 00:24:52.888 TEST_HEADER include/spdk/fd.h 00:24:52.888 TEST_HEADER include/spdk/file.h 00:24:52.888 TEST_HEADER include/spdk/fsdev.h 00:24:52.888 TEST_HEADER include/spdk/fsdev_module.h 00:24:52.888 TEST_HEADER include/spdk/ftl.h 00:24:52.888 TEST_HEADER include/spdk/fuse_dispatcher.h 00:24:52.888 TEST_HEADER include/spdk/gpt_spec.h 00:24:52.888 TEST_HEADER include/spdk/hexlify.h 00:24:52.888 TEST_HEADER include/spdk/histogram_data.h 00:24:52.888 TEST_HEADER include/spdk/idxd.h 00:24:52.888 TEST_HEADER include/spdk/idxd_spec.h 00:24:52.888 TEST_HEADER include/spdk/init.h 00:24:52.888 TEST_HEADER include/spdk/ioat.h 00:24:52.888 TEST_HEADER include/spdk/ioat_spec.h 00:24:52.888 TEST_HEADER include/spdk/iscsi_spec.h 00:24:52.888 TEST_HEADER include/spdk/json.h 00:24:52.888 TEST_HEADER include/spdk/jsonrpc.h 00:24:52.888 TEST_HEADER include/spdk/keyring.h 00:24:52.888 TEST_HEADER include/spdk/keyring_module.h 00:24:52.888 TEST_HEADER include/spdk/likely.h 00:24:52.888 TEST_HEADER include/spdk/log.h 00:24:52.888 TEST_HEADER include/spdk/lvol.h 00:24:52.888 TEST_HEADER include/spdk/md5.h 00:24:52.888 TEST_HEADER include/spdk/memory.h 00:24:52.888 TEST_HEADER include/spdk/mmio.h 00:24:52.888 TEST_HEADER include/spdk/nbd.h 00:24:52.888 TEST_HEADER include/spdk/net.h 00:24:52.888 TEST_HEADER include/spdk/notify.h 00:24:52.888 TEST_HEADER include/spdk/nvme.h 00:24:52.888 TEST_HEADER include/spdk/nvme_intel.h 00:24:53.147 TEST_HEADER include/spdk/nvme_ocssd.h 00:24:53.147 CC test/app/bdev_svc/bdev_svc.o 00:24:53.147 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:24:53.147 TEST_HEADER include/spdk/nvme_spec.h 00:24:53.147 TEST_HEADER include/spdk/nvme_zns.h 00:24:53.147 TEST_HEADER include/spdk/nvmf_cmd.h 00:24:53.147 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:24:53.147 TEST_HEADER include/spdk/nvmf.h 00:24:53.147 TEST_HEADER include/spdk/nvmf_spec.h 00:24:53.147 TEST_HEADER include/spdk/nvmf_transport.h 00:24:53.147 TEST_HEADER include/spdk/opal.h 00:24:53.147 TEST_HEADER include/spdk/opal_spec.h 00:24:53.147 TEST_HEADER include/spdk/pci_ids.h 00:24:53.147 TEST_HEADER include/spdk/pipe.h 00:24:53.147 TEST_HEADER include/spdk/queue.h 00:24:53.147 TEST_HEADER include/spdk/reduce.h 00:24:53.147 TEST_HEADER include/spdk/rpc.h 00:24:53.147 TEST_HEADER include/spdk/scheduler.h 00:24:53.147 TEST_HEADER include/spdk/scsi.h 00:24:53.147 TEST_HEADER include/spdk/scsi_spec.h 00:24:53.147 TEST_HEADER include/spdk/sock.h 00:24:53.147 TEST_HEADER include/spdk/stdinc.h 00:24:53.147 TEST_HEADER include/spdk/string.h 00:24:53.147 TEST_HEADER include/spdk/thread.h 00:24:53.147 TEST_HEADER include/spdk/trace.h 00:24:53.147 TEST_HEADER include/spdk/trace_parser.h 00:24:53.147 TEST_HEADER include/spdk/tree.h 00:24:53.147 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:24:53.147 TEST_HEADER include/spdk/ublk.h 00:24:53.147 TEST_HEADER include/spdk/util.h 00:24:53.147 TEST_HEADER include/spdk/uuid.h 00:24:53.147 TEST_HEADER include/spdk/version.h 00:24:53.147 TEST_HEADER include/spdk/vfio_user_pci.h 00:24:53.147 TEST_HEADER include/spdk/vfio_user_spec.h 00:24:53.147 TEST_HEADER include/spdk/vhost.h 00:24:53.147 TEST_HEADER include/spdk/vmd.h 00:24:53.147 TEST_HEADER include/spdk/xor.h 00:24:53.147 TEST_HEADER include/spdk/zipf.h 00:24:53.147 CXX test/cpp_headers/accel.o 00:24:53.147 CC test/env/mem_callbacks/mem_callbacks.o 00:24:53.147 LINK verify 00:24:53.147 LINK spdk_nvme_discover 00:24:53.147 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:24:53.147 LINK bdev_svc 00:24:53.406 CXX test/cpp_headers/accel_module.o 00:24:53.407 CC examples/thread/thread/thread_ex.o 00:24:53.407 CXX test/cpp_headers/assert.o 00:24:53.665 LINK nvme_fuzz 00:24:53.665 CC examples/sock/hello_world/hello_sock.o 00:24:53.665 LINK test_dma 00:24:53.665 CC examples/vmd/lsvmd/lsvmd.o 00:24:53.665 CXX test/cpp_headers/barrier.o 00:24:53.665 LINK thread 00:24:53.665 LINK mem_callbacks 00:24:53.665 LINK lsvmd 00:24:53.924 CXX test/cpp_headers/base64.o 00:24:53.924 LINK hello_sock 00:24:53.924 CC test/app/histogram_perf/histogram_perf.o 00:24:53.924 CC examples/idxd/perf/perf.o 00:24:53.924 LINK spdk_nvme_identify 00:24:53.924 CC test/env/vtophys/vtophys.o 00:24:53.924 CXX test/cpp_headers/bdev.o 00:24:53.924 LINK spdk_nvme_perf 00:24:53.924 CXX test/cpp_headers/bdev_module.o 00:24:54.183 LINK histogram_perf 00:24:54.183 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:24:54.183 CC examples/vmd/led/led.o 00:24:54.183 LINK vtophys 00:24:54.183 CXX test/cpp_headers/bdev_zone.o 00:24:54.183 LINK env_dpdk_post_init 00:24:54.441 LINK led 00:24:54.441 CC test/app/jsoncat/jsoncat.o 00:24:54.441 CXX test/cpp_headers/bit_array.o 00:24:54.441 CC test/app/stub/stub.o 00:24:54.441 CC app/spdk_top/spdk_top.o 00:24:54.441 LINK idxd_perf 00:24:54.441 CC test/event/event_perf/event_perf.o 00:24:54.441 LINK jsoncat 00:24:54.441 CXX test/cpp_headers/bit_pool.o 00:24:54.441 CC app/vhost/vhost.o 00:24:54.441 CXX test/cpp_headers/blob_bdev.o 00:24:54.700 LINK stub 00:24:54.700 CC test/env/memory/memory_ut.o 00:24:54.700 LINK event_perf 00:24:54.700 CC examples/accel/perf/accel_perf.o 
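[annotation] The TEST_HEADER list and the CXX test/cpp_headers/*.o compiles above and below exercise every public header in its own translation unit, which catches headers that are not self-contained (missing includes, C++ incompatibilities). A minimal sketch of what one such unit plausibly does -- the actual generator is not visible in this log:

    echo '#include <spdk/accel.h>' > accel.cpp
    c++ -I include -c accel.cpp -o accel.o   # fails if spdk/accel.h does not compile standalone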
00:24:54.700 CXX test/cpp_headers/blobfs_bdev.o 00:24:54.700 LINK vhost 00:24:54.700 CXX test/cpp_headers/blobfs.o 00:24:54.700 CC app/spdk_dd/spdk_dd.o 00:24:54.700 CC test/nvme/aer/aer.o 00:24:54.959 CC test/event/reactor/reactor.o 00:24:54.959 CXX test/cpp_headers/blob.o 00:24:54.959 CXX test/cpp_headers/conf.o 00:24:54.959 LINK reactor 00:24:54.959 CC test/event/reactor_perf/reactor_perf.o 00:24:55.218 CXX test/cpp_headers/config.o 00:24:55.218 LINK aer 00:24:55.218 CXX test/cpp_headers/cpuset.o 00:24:55.218 LINK reactor_perf 00:24:55.218 CXX test/cpp_headers/crc16.o 00:24:55.218 LINK spdk_dd 00:24:55.218 CC app/fio/nvme/fio_plugin.o 00:24:55.218 LINK iscsi_fuzz 00:24:55.477 LINK accel_perf 00:24:55.477 CXX test/cpp_headers/crc32.o 00:24:55.477 CC test/nvme/reset/reset.o 00:24:55.477 CC test/event/app_repeat/app_repeat.o 00:24:55.477 LINK spdk_top 00:24:55.477 CC test/event/scheduler/scheduler.o 00:24:55.477 CXX test/cpp_headers/crc64.o 00:24:55.737 CC test/nvme/sgl/sgl.o 00:24:55.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:24:55.737 LINK app_repeat 00:24:55.737 LINK reset 00:24:55.737 CXX test/cpp_headers/dif.o 00:24:55.737 CXX test/cpp_headers/dma.o 00:24:55.737 CC examples/blob/hello_world/hello_blob.o 00:24:55.737 LINK scheduler 00:24:55.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:24:55.996 CXX test/cpp_headers/endian.o 00:24:55.996 CXX test/cpp_headers/env_dpdk.o 00:24:55.996 LINK memory_ut 00:24:55.996 CC test/rpc_client/rpc_client_test.o 00:24:55.996 LINK sgl 00:24:55.996 LINK spdk_nvme 00:24:55.996 CXX test/cpp_headers/env.o 00:24:55.996 CC examples/nvme/hello_world/hello_world.o 00:24:55.996 LINK hello_blob 00:24:56.256 LINK rpc_client_test 00:24:56.256 CC app/fio/bdev/fio_plugin.o 00:24:56.256 CXX test/cpp_headers/event.o 00:24:56.256 CC test/env/pci/pci_ut.o 00:24:56.256 CC examples/fsdev/hello_world/hello_fsdev.o 00:24:56.256 CC test/nvme/e2edp/nvme_dp.o 00:24:56.256 LINK vhost_fuzz 00:24:56.256 LINK hello_world 00:24:56.256 CXX test/cpp_headers/fd_group.o 00:24:56.256 CC examples/bdev/hello_world/hello_bdev.o 00:24:56.516 CC examples/blob/cli/blobcli.o 00:24:56.516 CXX test/cpp_headers/fd.o 00:24:56.516 LINK hello_fsdev 00:24:56.516 CC examples/bdev/bdevperf/bdevperf.o 00:24:56.516 LINK nvme_dp 00:24:56.516 CC examples/nvme/reconnect/reconnect.o 00:24:56.516 LINK hello_bdev 00:24:56.775 CXX test/cpp_headers/file.o 00:24:56.775 LINK pci_ut 00:24:56.775 CXX test/cpp_headers/fsdev.o 00:24:56.775 CC test/accel/dif/dif.o 00:24:56.775 LINK spdk_bdev 00:24:56.775 CC test/nvme/overhead/overhead.o 00:24:57.034 CXX test/cpp_headers/fsdev_module.o 00:24:57.034 LINK blobcli 00:24:57.034 CC test/nvme/err_injection/err_injection.o 00:24:57.034 CC test/nvme/startup/startup.o 00:24:57.034 LINK reconnect 00:24:57.034 CC test/nvme/reserve/reserve.o 00:24:57.034 CC test/nvme/simple_copy/simple_copy.o 00:24:57.034 CXX test/cpp_headers/ftl.o 00:24:57.034 CXX test/cpp_headers/fuse_dispatcher.o 00:24:57.293 LINK overhead 00:24:57.293 LINK err_injection 00:24:57.293 LINK startup 00:24:57.293 LINK reserve 00:24:57.293 CC examples/nvme/nvme_manage/nvme_manage.o 00:24:57.293 LINK simple_copy 00:24:57.293 CXX test/cpp_headers/gpt_spec.o 00:24:57.293 CXX test/cpp_headers/hexlify.o 00:24:57.293 CC examples/nvme/arbitration/arbitration.o 00:24:57.293 CXX test/cpp_headers/histogram_data.o 00:24:57.553 CXX test/cpp_headers/idxd.o 00:24:57.553 CC test/nvme/connect_stress/connect_stress.o 00:24:57.553 CC test/nvme/boot_partition/boot_partition.o 00:24:57.553 LINK dif 00:24:57.553 CC 
examples/nvme/hotplug/hotplug.o 00:24:57.553 CC examples/nvme/cmb_copy/cmb_copy.o 00:24:57.553 CC examples/nvme/abort/abort.o 00:24:57.812 CXX test/cpp_headers/idxd_spec.o 00:24:57.812 LINK bdevperf 00:24:57.812 LINK boot_partition 00:24:57.812 LINK connect_stress 00:24:57.812 LINK arbitration 00:24:57.812 LINK cmb_copy 00:24:57.812 CXX test/cpp_headers/init.o 00:24:57.813 CXX test/cpp_headers/ioat.o 00:24:57.813 LINK nvme_manage 00:24:57.813 LINK hotplug 00:24:58.072 CC test/nvme/compliance/nvme_compliance.o 00:24:58.072 CXX test/cpp_headers/ioat_spec.o 00:24:58.072 CXX test/cpp_headers/iscsi_spec.o 00:24:58.072 CXX test/cpp_headers/json.o 00:24:58.072 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:24:58.072 LINK abort 00:24:58.072 CXX test/cpp_headers/jsonrpc.o 00:24:58.072 CC test/nvme/fused_ordering/fused_ordering.o 00:24:58.072 CC test/blobfs/mkfs/mkfs.o 00:24:58.332 CXX test/cpp_headers/keyring.o 00:24:58.332 CC test/nvme/doorbell_aers/doorbell_aers.o 00:24:58.332 LINK pmr_persistence 00:24:58.332 CXX test/cpp_headers/keyring_module.o 00:24:58.332 CC test/bdev/bdevio/bdevio.o 00:24:58.332 CC test/lvol/esnap/esnap.o 00:24:58.332 LINK nvme_compliance 00:24:58.332 LINK mkfs 00:24:58.332 CC test/nvme/fdp/fdp.o 00:24:58.332 CXX test/cpp_headers/likely.o 00:24:58.332 LINK fused_ordering 00:24:58.591 LINK doorbell_aers 00:24:58.591 CC test/nvme/cuse/cuse.o 00:24:58.591 CXX test/cpp_headers/log.o 00:24:58.591 CXX test/cpp_headers/lvol.o 00:24:58.591 CXX test/cpp_headers/md5.o 00:24:58.591 CXX test/cpp_headers/memory.o 00:24:58.591 CC examples/nvmf/nvmf/nvmf.o 00:24:58.850 CXX test/cpp_headers/mmio.o 00:24:58.850 CXX test/cpp_headers/nbd.o 00:24:58.850 CXX test/cpp_headers/net.o 00:24:58.850 LINK bdevio 00:24:58.850 CXX test/cpp_headers/notify.o 00:24:58.850 CXX test/cpp_headers/nvme.o 00:24:58.850 LINK fdp 00:24:58.850 CXX test/cpp_headers/nvme_intel.o 00:24:58.850 CXX test/cpp_headers/nvme_ocssd.o 00:24:58.850 CXX test/cpp_headers/nvme_ocssd_spec.o 00:24:58.850 CXX test/cpp_headers/nvme_spec.o 00:24:58.850 CXX test/cpp_headers/nvme_zns.o 00:24:59.109 CXX test/cpp_headers/nvmf_cmd.o 00:24:59.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:24:59.109 LINK nvmf 00:24:59.109 CXX test/cpp_headers/nvmf.o 00:24:59.109 CXX test/cpp_headers/nvmf_spec.o 00:24:59.109 CXX test/cpp_headers/nvmf_transport.o 00:24:59.109 CXX test/cpp_headers/opal.o 00:24:59.109 CXX test/cpp_headers/opal_spec.o 00:24:59.109 CXX test/cpp_headers/pci_ids.o 00:24:59.368 CXX test/cpp_headers/pipe.o 00:24:59.368 CXX test/cpp_headers/queue.o 00:24:59.368 CXX test/cpp_headers/reduce.o 00:24:59.368 CXX test/cpp_headers/rpc.o 00:24:59.368 CXX test/cpp_headers/scheduler.o 00:24:59.368 CXX test/cpp_headers/scsi.o 00:24:59.368 CXX test/cpp_headers/scsi_spec.o 00:24:59.368 CXX test/cpp_headers/sock.o 00:24:59.368 CXX test/cpp_headers/stdinc.o 00:24:59.368 CXX test/cpp_headers/string.o 00:24:59.368 CXX test/cpp_headers/thread.o 00:24:59.368 CXX test/cpp_headers/trace.o 00:24:59.368 CXX test/cpp_headers/trace_parser.o 00:24:59.627 CXX test/cpp_headers/tree.o 00:24:59.627 CXX test/cpp_headers/ublk.o 00:24:59.627 CXX test/cpp_headers/util.o 00:24:59.627 CXX test/cpp_headers/uuid.o 00:24:59.627 CXX test/cpp_headers/version.o 00:24:59.627 CXX test/cpp_headers/vfio_user_pci.o 00:24:59.627 CXX test/cpp_headers/vfio_user_spec.o 00:24:59.627 CXX test/cpp_headers/vhost.o 00:24:59.627 CXX test/cpp_headers/vmd.o 00:24:59.627 CXX test/cpp_headers/xor.o 00:24:59.627 CXX test/cpp_headers/zipf.o 00:24:59.887 LINK cuse 00:25:05.165 LINK esnap 
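[annotation] The make timing summary that follows reports roughly 8 minutes of CPU time (user) against 1.5 minutes of wall time (real), which is the signature of a parallel build; roughly equivalent to:

    make -j"$(nproc)"   # assumption: the actual -j value used is not visible in this log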
00:25:05.165 00:25:05.165 real 1m31.019s 00:25:05.165 user 8m1.082s 00:25:05.165 sys 1m44.392s 00:25:05.165 17:21:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:25:05.165 ************************************ 00:25:05.165 END TEST make 00:25:05.165 ************************************ 00:25:05.165 17:21:42 make -- common/autotest_common.sh@10 -- $ set +x 00:25:05.165 17:21:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:25:05.165 17:21:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:05.165 17:21:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:05.165 17:21:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:05.165 17:21:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:05.165 17:21:42 -- pm/common@44 -- $ pid=5503 00:25:05.165 17:21:42 -- pm/common@50 -- $ kill -TERM 5503 00:25:05.165 17:21:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:05.165 17:21:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:05.165 17:21:42 -- pm/common@44 -- $ pid=5505 00:25:05.165 17:21:42 -- pm/common@50 -- $ kill -TERM 5505 00:25:05.165 17:21:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:25:05.165 17:21:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:25:05.165 17:21:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.165 17:21:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.165 17:21:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:05.426 17:21:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:05.426 17:21:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.426 17:21:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.426 17:21:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.426 17:21:42 -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.426 17:21:42 -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.426 17:21:42 -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.426 17:21:42 -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.426 17:21:42 -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.426 17:21:42 -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.426 17:21:42 -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.426 17:21:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.426 17:21:42 -- scripts/common.sh@344 -- # case "$op" in 00:25:05.426 17:21:42 -- scripts/common.sh@345 -- # : 1 00:25:05.426 17:21:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.426 17:21:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.426 17:21:42 -- scripts/common.sh@365 -- # decimal 1 00:25:05.426 17:21:42 -- scripts/common.sh@353 -- # local d=1 00:25:05.426 17:21:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.426 17:21:42 -- scripts/common.sh@355 -- # echo 1 00:25:05.426 17:21:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.426 17:21:42 -- scripts/common.sh@366 -- # decimal 2 00:25:05.426 17:21:42 -- scripts/common.sh@353 -- # local d=2 00:25:05.426 17:21:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.426 17:21:42 -- scripts/common.sh@355 -- # echo 2 00:25:05.426 17:21:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.426 17:21:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.426 17:21:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.426 17:21:42 -- scripts/common.sh@368 -- # return 0 00:25:05.426 17:21:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.426 17:21:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:05.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.426 --rc genhtml_branch_coverage=1 00:25:05.426 --rc genhtml_function_coverage=1 00:25:05.426 --rc genhtml_legend=1 00:25:05.426 --rc geninfo_all_blocks=1 00:25:05.426 --rc geninfo_unexecuted_blocks=1 00:25:05.426 00:25:05.426 ' 00:25:05.426 17:21:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:05.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.426 --rc genhtml_branch_coverage=1 00:25:05.426 --rc genhtml_function_coverage=1 00:25:05.426 --rc genhtml_legend=1 00:25:05.426 --rc geninfo_all_blocks=1 00:25:05.426 --rc geninfo_unexecuted_blocks=1 00:25:05.426 00:25:05.426 ' 00:25:05.426 17:21:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:05.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.426 --rc genhtml_branch_coverage=1 00:25:05.426 --rc genhtml_function_coverage=1 00:25:05.426 --rc genhtml_legend=1 00:25:05.426 --rc geninfo_all_blocks=1 00:25:05.426 --rc geninfo_unexecuted_blocks=1 00:25:05.426 00:25:05.426 ' 00:25:05.426 17:21:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:05.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.426 --rc genhtml_branch_coverage=1 00:25:05.426 --rc genhtml_function_coverage=1 00:25:05.426 --rc genhtml_legend=1 00:25:05.426 --rc geninfo_all_blocks=1 00:25:05.426 --rc geninfo_unexecuted_blocks=1 00:25:05.426 00:25:05.426 ' 00:25:05.426 17:21:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:05.426 17:21:42 -- nvmf/common.sh@7 -- # uname -s 00:25:05.426 17:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.426 17:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.426 17:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.426 17:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.426 17:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.426 17:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.426 17:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.426 17:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.426 17:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.426 17:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.426 17:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1430afe-7853-490a-a832-69c50badaf60 00:25:05.426 
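[annotation] The scripts/common.sh xtrace above (cmp_versions via `lt 1.15 2`) is a field-by-field dotted-version compare used to decide whether the installed lcov predates 2.x and therefore needs the legacy option spelling. Condensed into a standalone sketch, simplified from the traced logic:

    lt() { # true (0) if dotted version $1 < $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # earlier field decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_branch_coverage=1 style flags'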
17:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=e1430afe-7853-490a-a832-69c50badaf60 00:25:05.426 17:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.426 17:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.426 17:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:05.426 17:21:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.426 17:21:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:05.426 17:21:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.426 17:21:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.426 17:21:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.426 17:21:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.426 17:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.426 17:21:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.426 17:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.426 17:21:42 -- paths/export.sh@5 -- # export PATH 00:25:05.426 17:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.426 17:21:42 -- nvmf/common.sh@51 -- # : 0 00:25:05.426 17:21:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.426 17:21:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.426 17:21:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.426 17:21:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.426 17:21:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.426 17:21:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.426 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.426 17:21:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.426 17:21:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.426 17:21:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.426 17:21:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:25:05.426 17:21:42 -- spdk/autotest.sh@32 -- # uname -s 00:25:05.426 17:21:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:25:05.426 17:21:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:25:05.426 17:21:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:25:05.426 17:21:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:25:05.426 17:21:42 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:25:05.426 17:21:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:25:05.426 17:21:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:25:05.426 17:21:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:25:05.426 17:21:42 -- spdk/autotest.sh@48 -- # udevadm_pid=55044 00:25:05.426 17:21:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:25:05.426 17:21:42 -- pm/common@17 -- # local monitor 00:25:05.426 17:21:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:25:05.426 17:21:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:25:05.426 17:21:42 -- pm/common@25 -- # sleep 1 00:25:05.426 17:21:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:25:05.426 17:21:42 -- pm/common@21 -- # date +%s 00:25:05.426 17:21:42 -- pm/common@21 -- # date +%s 00:25:05.426 17:21:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641702 00:25:05.426 17:21:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641702 00:25:05.426 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641702_collect-vmstat.pm.log 00:25:05.426 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641702_collect-cpu-load.pm.log 00:25:06.365 17:21:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:25:06.365 17:21:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:25:06.365 17:21:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.365 17:21:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.365 17:21:43 -- spdk/autotest.sh@59 -- # create_test_list 00:25:06.365 17:21:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:25:06.365 17:21:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.625 17:21:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:25:06.625 17:21:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:25:06.625 17:21:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:25:06.625 17:21:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:25:06.625 17:21:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:25:06.625 17:21:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:25:06.625 17:21:43 -- common/autotest_common.sh@1457 -- # uname 00:25:06.625 17:21:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:25:06.625 17:21:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:25:06.625 17:21:43 -- common/autotest_common.sh@1477 -- # uname 00:25:06.625 17:21:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:25:06.625 17:21:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:25:06.625 17:21:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:25:06.625 lcov: LCOV version 1.15 00:25:06.625 17:21:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:25:24.721 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:25:24.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:25:39.611 17:22:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:25:39.611 17:22:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.611 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:39.611 17:22:15 -- spdk/autotest.sh@78 -- # rm -f 00:25:39.611 17:22:15 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:39.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:39.611 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:39.611 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:39.611 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:39.611 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:39.611 17:22:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:25:39.611 17:22:16 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:25:39.611 17:22:16 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:25:39.611 17:22:16 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:25:39.611 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.611 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:25:39.611 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:39.611 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.611 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:25:39.611 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:39.611 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.611 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:25:39.611 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:25:39.611 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.611 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:25:39.611 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:25:39.611 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.611 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:25:39.611 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:25:39.611 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:25:39.611 17:22:16 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.611 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.612 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:25:39.612 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:25:39.612 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:25:39.612 17:22:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.612 17:22:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:39.612 17:22:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:25:39.612 17:22:16 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:25:39.612 17:22:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:25:39.612 17:22:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.612 17:22:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:25:39.612 17:22:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:39.612 17:22:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:39.612 17:22:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:25:39.612 17:22:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:25:39.612 17:22:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:25:39.612 No valid GPT data, bailing 00:25:39.612 17:22:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:39.871 17:22:17 -- scripts/common.sh@394 -- # pt= 00:25:39.871 17:22:17 -- scripts/common.sh@395 -- # return 1 00:25:39.871 17:22:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:25:39.871 1+0 records in 00:25:39.871 1+0 records out 00:25:39.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163545 s, 64.1 MB/s 00:25:39.871 17:22:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:39.871 17:22:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:39.871 17:22:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:25:39.871 17:22:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:25:39.871 17:22:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:25:39.871 No valid GPT data, bailing 00:25:39.871 17:22:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:39.871 17:22:17 -- scripts/common.sh@394 -- # pt= 00:25:39.871 17:22:17 -- scripts/common.sh@395 -- # return 1 00:25:39.871 17:22:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:25:39.871 1+0 records in 00:25:39.871 1+0 records out 00:25:39.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048164 s, 218 MB/s 00:25:39.871 17:22:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:39.871 17:22:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:39.871 17:22:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:25:39.871 17:22:17 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:25:39.871 17:22:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:25:39.871 No valid GPT data, bailing 00:25:39.871 17:22:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:25:39.871 17:22:17 -- scripts/common.sh@394 -- # pt= 00:25:39.871 17:22:17 -- scripts/common.sh@395 -- # return 1 00:25:39.871 17:22:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:25:39.871 1+0 
records in 00:25:39.871 1+0 records out 00:25:39.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469296 s, 223 MB/s 00:25:39.871 17:22:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:39.871 17:22:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:39.871 17:22:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:25:39.871 17:22:17 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:25:39.871 17:22:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:25:39.871 No valid GPT data, bailing 00:25:39.871 17:22:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:25:40.131 17:22:17 -- scripts/common.sh@394 -- # pt= 00:25:40.131 17:22:17 -- scripts/common.sh@395 -- # return 1 00:25:40.131 17:22:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:25:40.131 1+0 records in 00:25:40.131 1+0 records out 00:25:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00796833 s, 132 MB/s 00:25:40.131 17:22:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:40.131 17:22:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:40.131 17:22:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:25:40.131 17:22:17 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:25:40.131 17:22:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:25:40.131 No valid GPT data, bailing 00:25:40.131 17:22:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:25:40.131 17:22:17 -- scripts/common.sh@394 -- # pt= 00:25:40.131 17:22:17 -- scripts/common.sh@395 -- # return 1 00:25:40.131 17:22:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:25:40.131 1+0 records in 00:25:40.131 1+0 records out 00:25:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656278 s, 160 MB/s 00:25:40.131 17:22:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:40.131 17:22:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:40.131 17:22:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:25:40.131 17:22:17 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:25:40.131 17:22:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:25:40.131 No valid GPT data, bailing 00:25:40.131 17:22:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:25:40.131 17:22:17 -- scripts/common.sh@394 -- # pt= 00:25:40.131 17:22:17 -- scripts/common.sh@395 -- # return 1 00:25:40.131 17:22:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:25:40.131 1+0 records in 00:25:40.131 1+0 records out 00:25:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641442 s, 163 MB/s 00:25:40.131 17:22:17 -- spdk/autotest.sh@105 -- # sync 00:25:40.131 17:22:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:25:40.131 17:22:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:25:40.131 17:22:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:25:43.425 17:22:20 -- spdk/autotest.sh@111 -- # uname -s 00:25:43.425 17:22:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:25:43.425 17:22:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:25:43.425 17:22:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:25:43.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:44.265 
Hugepages 00:25:44.265 node hugesize free / total 00:25:44.265 node0 1048576kB 0 / 0 00:25:44.265 node0 2048kB 0 / 0 00:25:44.265 00:25:44.265 Type BDF Vendor Device NUMA Driver Device Block devices 00:25:44.265 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:25:44.535 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:25:44.535 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:25:44.797 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:25:44.797 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:25:44.797 17:22:22 -- spdk/autotest.sh@117 -- # uname -s 00:25:44.797 17:22:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:25:44.797 17:22:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:25:44.797 17:22:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:45.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:46.303 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:46.303 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:46.303 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:46.303 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:46.303 17:22:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:25:47.254 17:22:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:25:47.254 17:22:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:25:47.254 17:22:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:47.254 17:22:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:47.254 17:22:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:47.254 17:22:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:47.254 17:22:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:47.254 17:22:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:47.254 17:22:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:47.522 17:22:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:25:47.522 17:22:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:47.522 17:22:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:47.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:48.067 Waiting for block devices as requested 00:25:48.067 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:48.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:48.338 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:48.338 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:53.621 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:53.621 17:22:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:25:53.621 17:22:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:25:53.621 17:22:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:53.621 17:22:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:25:53.621 17:22:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:53.621 17:22:30 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:25:53.621 17:22:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:53.621 17:22:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:25:53.621 17:22:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:25:53.621 17:22:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:25:53.621 17:22:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:25:53.621 17:22:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:25:53.621 17:22:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:25:53.621 17:22:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:25:53.621 17:22:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:25:53.621 17:22:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:25:53.621 17:22:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:25:53.621 17:22:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:25:53.621 17:22:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:25:53.621 17:22:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:25:53.621 17:22:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:25:53.621 17:22:30 -- common/autotest_common.sh@1543 -- # continue 00:25:53.621 17:22:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:25:53.621 17:22:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:25:53.621 17:22:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:53.621 17:22:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:25:53.621 17:22:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:53.621 17:22:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:25:53.621 17:22:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:53.621 17:22:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:25:53.622 17:22:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:25:53.622 17:22:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:25:53.622 17:22:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:25:53.622 17:22:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:25:53.622 17:22:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1543 -- # continue 00:25:53.622 17:22:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:25:53.622 17:22:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:25:53.622 17:22:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:25:53.622 17:22:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:25:53.622 17:22:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:25:53.622 17:22:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:25:53.622 17:22:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1543 -- # continue 00:25:53.622 17:22:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:25:53.622 17:22:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:25:53.622 17:22:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:25:53.622 17:22:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:25:53.622 17:22:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:25:53.622 17:22:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:25:53.622 17:22:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:25:53.622 17:22:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:25:53.622 17:22:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:25:53.622 17:22:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
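[annotation] Each pass of the bdf loop above (the nvme3 pass concludes with its `continue` just below) probes one controller the same way: resolve the PCI address to its /dev node through sysfs, then parse `nvme id-ctrl` for the OACS capability word and the unallocated capacity. A hypothetical condensation, with the controller node and values taken from this trace:

    ctrlr=/dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' in this run
    oacs_ns_manage=$((oacs & 0x8))                                  # bit 3: Namespace Management supported
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)   # ' 0' in this run
    if [[ $oacs_ns_manage -ne 0 && $unvmcap -eq 0 ]]; then
        echo "$ctrlr: NS management present, no unallocated capacity -> continue"
    fi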
00:25:53.622 17:22:31 -- common/autotest_common.sh@1543 -- # continue 00:25:53.622 17:22:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:25:53.622 17:22:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.622 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:53.622 17:22:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:25:53.622 17:22:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.622 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:53.622 17:22:31 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:54.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:55.124 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:55.124 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:55.124 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:55.124 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:55.124 17:22:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:25:55.124 17:22:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.124 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:55.124 17:22:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:25:55.124 17:22:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:25:55.124 17:22:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:25:55.124 17:22:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:25:55.124 17:22:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:25:55.124 17:22:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:25:55.124 17:22:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:25:55.124 17:22:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:25:55.124 17:22:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:55.124 17:22:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:55.124 17:22:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:55.124 17:22:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:55.124 17:22:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:55.382 17:22:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:25:55.382 17:22:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:55.382 17:22:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:25:55.382 17:22:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:55.382 17:22:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:25:55.382 17:22:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:55.382 17:22:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:25:55.382 17:22:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
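[annotation] The `for bdf` loop here (and continuing below for 0000:00:13.0) is get_nvme_bdfs_by_id filtering controllers by PCI device ID; 0x0a54 is the ID opal_revert_cleanup collects, and none of the QEMU-emulated 0x0010 devices match, so the list stays empty. One iteration, condensed:

    bdfs=()
    device=$(cat /sys/bus/pci/devices/0000:00:12.0/device)   # '0x0010' on this VM
    [[ $device == 0x0a54 ]] && bdfs+=(0000:00:12.0)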
00:25:55.382 17:22:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:25:55.382 17:22:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:25:55.382 17:22:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:55.382 17:22:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:25:55.382 17:22:32 -- common/autotest_common.sh@1572 -- # return 0 00:25:55.382 17:22:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:25:55.382 17:22:32 -- common/autotest_common.sh@1580 -- # return 0 00:25:55.382 17:22:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:25:55.382 17:22:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:25:55.382 17:22:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:55.382 17:22:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:55.382 17:22:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:25:55.382 17:22:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.382 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:55.382 17:22:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:25:55.382 17:22:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:55.382 17:22:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:55.382 17:22:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.382 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:55.382 ************************************ 00:25:55.382 START TEST env 00:25:55.382 ************************************ 00:25:55.382 17:22:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:55.382 * Looking for test storage... 00:25:55.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1693 -- # lcov --version 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:55.640 17:22:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.640 17:22:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.640 17:22:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.640 17:22:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.640 17:22:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.640 17:22:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.640 17:22:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.640 17:22:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.640 17:22:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.640 17:22:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.640 17:22:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.640 17:22:32 env -- scripts/common.sh@344 -- # case "$op" in 00:25:55.640 17:22:32 env -- scripts/common.sh@345 -- # : 1 00:25:55.640 17:22:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.640 17:22:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.640 17:22:32 env -- scripts/common.sh@365 -- # decimal 1 00:25:55.640 17:22:32 env -- scripts/common.sh@353 -- # local d=1 00:25:55.640 17:22:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.640 17:22:32 env -- scripts/common.sh@355 -- # echo 1 00:25:55.640 17:22:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.640 17:22:32 env -- scripts/common.sh@366 -- # decimal 2 00:25:55.640 17:22:32 env -- scripts/common.sh@353 -- # local d=2 00:25:55.640 17:22:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.640 17:22:32 env -- scripts/common.sh@355 -- # echo 2 00:25:55.640 17:22:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.640 17:22:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.640 17:22:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.640 17:22:32 env -- scripts/common.sh@368 -- # return 0 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.640 --rc genhtml_branch_coverage=1 00:25:55.640 --rc genhtml_function_coverage=1 00:25:55.640 --rc genhtml_legend=1 00:25:55.640 --rc geninfo_all_blocks=1 00:25:55.640 --rc geninfo_unexecuted_blocks=1 00:25:55.640 00:25:55.640 ' 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.640 --rc genhtml_branch_coverage=1 00:25:55.640 --rc genhtml_function_coverage=1 00:25:55.640 --rc genhtml_legend=1 00:25:55.640 --rc geninfo_all_blocks=1 00:25:55.640 --rc geninfo_unexecuted_blocks=1 00:25:55.640 00:25:55.640 ' 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.640 --rc genhtml_branch_coverage=1 00:25:55.640 --rc genhtml_function_coverage=1 00:25:55.640 --rc genhtml_legend=1 00:25:55.640 --rc geninfo_all_blocks=1 00:25:55.640 --rc geninfo_unexecuted_blocks=1 00:25:55.640 00:25:55.640 ' 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.640 --rc genhtml_branch_coverage=1 00:25:55.640 --rc genhtml_function_coverage=1 00:25:55.640 --rc genhtml_legend=1 00:25:55.640 --rc geninfo_all_blocks=1 00:25:55.640 --rc geninfo_unexecuted_blocks=1 00:25:55.640 00:25:55.640 ' 00:25:55.640 17:22:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:55.640 17:22:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.640 17:22:32 env -- common/autotest_common.sh@10 -- # set +x 00:25:55.640 ************************************ 00:25:55.640 START TEST env_memory 00:25:55.640 ************************************ 00:25:55.640 17:22:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:55.640 00:25:55.640 00:25:55.640 CUnit - A unit testing framework for C - Version 2.1-3 00:25:55.640 http://cunit.sourceforge.net/ 00:25:55.640 00:25:55.640 00:25:55.640 Suite: memory 00:25:55.640 Test: alloc and free memory map ...[2024-11-26 17:22:33.038470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:25:55.640 passed 00:25:55.899 Test: mem map translation ...[2024-11-26 17:22:33.088422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:25:55.899 [2024-11-26 17:22:33.088492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:25:55.899 [2024-11-26 17:22:33.088554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:25:55.899 [2024-11-26 17:22:33.088575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:25:55.899 passed 00:25:55.899 Test: mem map registration ...[2024-11-26 17:22:33.158448] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:25:55.899 [2024-11-26 17:22:33.158529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:25:55.899 passed 00:25:55.899 Test: mem map adjacent registrations ...passed 00:25:55.899 00:25:55.899 Run Summary: Type Total Ran Passed Failed Inactive 00:25:55.899 suites 1 1 n/a 0 0 00:25:55.899 tests 4 4 4 0 0 00:25:55.899 asserts 152 152 152 0 n/a 00:25:55.899 00:25:55.899 Elapsed time = 0.272 seconds 00:25:55.899 00:25:55.899 real 0m0.327s 00:25:55.899 user 0m0.284s 00:25:55.899 sys 0m0.030s 00:25:55.899 17:22:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.899 17:22:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:25:55.899 ************************************ 00:25:55.899 END TEST env_memory 00:25:55.899 ************************************ 00:25:55.899 17:22:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:55.899 17:22:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:55.899 17:22:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.899 17:22:33 env -- common/autotest_common.sh@10 -- # set +x 00:25:56.157 ************************************ 00:25:56.157 START TEST env_vtophys 00:25:56.157 ************************************ 00:25:56.157 17:22:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:56.157 EAL: lib.eal log level changed from notice to debug 00:25:56.157 EAL: Detected lcore 0 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 1 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 2 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 3 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 4 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 5 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 6 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 7 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 8 as core 0 on socket 0 00:25:56.157 EAL: Detected lcore 9 as core 0 on socket 0 00:25:56.157 EAL: Maximum logical cores by configuration: 128 00:25:56.157 EAL: Detected CPU lcores: 10 00:25:56.157 EAL: Detected NUMA nodes: 1 00:25:56.157 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:25:56.157 EAL: Detected shared linkage of DPDK 00:25:56.157 EAL: No 
shared files mode enabled, IPC will be disabled 00:25:56.157 EAL: Selected IOVA mode 'PA' 00:25:56.157 EAL: Probing VFIO support... 00:25:56.157 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:56.157 EAL: VFIO modules not loaded, skipping VFIO support... 00:25:56.157 EAL: Ask a virtual area of 0x2e000 bytes 00:25:56.157 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:25:56.157 EAL: Setting up physically contiguous memory... 00:25:56.157 EAL: Setting maximum number of open files to 524288 00:25:56.157 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:25:56.157 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:25:56.157 EAL: Ask a virtual area of 0x61000 bytes 00:25:56.157 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:25:56.157 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:56.157 EAL: Ask a virtual area of 0x400000000 bytes 00:25:56.157 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:25:56.157 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:25:56.157 EAL: Ask a virtual area of 0x61000 bytes 00:25:56.157 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:25:56.157 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:56.157 EAL: Ask a virtual area of 0x400000000 bytes 00:25:56.157 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:25:56.157 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:25:56.157 EAL: Ask a virtual area of 0x61000 bytes 00:25:56.157 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:25:56.157 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:56.157 EAL: Ask a virtual area of 0x400000000 bytes 00:25:56.157 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:25:56.157 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:25:56.157 EAL: Ask a virtual area of 0x61000 bytes 00:25:56.158 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:25:56.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:56.158 EAL: Ask a virtual area of 0x400000000 bytes 00:25:56.158 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:25:56.158 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:25:56.158 EAL: Hugepages will be freed exactly as allocated. 00:25:56.158 EAL: No shared files mode enabled, IPC is disabled 00:25:56.158 EAL: No shared files mode enabled, IPC is disabled 00:25:56.158 EAL: TSC frequency is ~2290000 KHz 00:25:56.158 EAL: Main lcore 0 is ready (tid=7f3bd085ba40;cpuset=[0]) 00:25:56.158 EAL: Trying to obtain current memory policy. 00:25:56.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.158 EAL: Restoring previous memory policy: 0 00:25:56.158 EAL: request: mp_malloc_sync 00:25:56.158 EAL: No shared files mode enabled, IPC is disabled 00:25:56.158 EAL: Heap on socket 0 was expanded by 2MB 00:25:56.158 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:56.158 EAL: No PCI address specified using 'addr=' in: bus=pci 00:25:56.158 EAL: Mem event callback 'spdk:(nil)' registered 00:25:56.158 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:25:56.158 00:25:56.158 00:25:56.158 CUnit - A unit testing framework for C - Version 2.1-3 00:25:56.158 http://cunit.sourceforge.net/ 00:25:56.158 00:25:56.158 00:25:56.158 Suite: components_suite 00:25:56.722 Test: vtophys_malloc_test ...passed 00:25:56.722 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:25:56.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.722 EAL: Restoring previous memory policy: 4 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was expanded by 4MB 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was shrunk by 4MB 00:25:56.722 EAL: Trying to obtain current memory policy. 00:25:56.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.722 EAL: Restoring previous memory policy: 4 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was expanded by 6MB 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was shrunk by 6MB 00:25:56.722 EAL: Trying to obtain current memory policy. 00:25:56.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.722 EAL: Restoring previous memory policy: 4 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was expanded by 10MB 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was shrunk by 10MB 00:25:56.722 EAL: Trying to obtain current memory policy. 00:25:56.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.722 EAL: Restoring previous memory policy: 4 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was expanded by 18MB 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was shrunk by 18MB 00:25:56.722 EAL: Trying to obtain current memory policy. 00:25:56.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.722 EAL: Restoring previous memory policy: 4 00:25:56.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.722 EAL: request: mp_malloc_sync 00:25:56.722 EAL: No shared files mode enabled, IPC is disabled 00:25:56.722 EAL: Heap on socket 0 was expanded by 34MB 00:25:56.979 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.979 EAL: request: mp_malloc_sync 00:25:56.979 EAL: No shared files mode enabled, IPC is disabled 00:25:56.979 EAL: Heap on socket 0 was shrunk by 34MB 00:25:56.979 EAL: Trying to obtain current memory policy. 
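[Editor's note] The *ERROR* lines from the env_memory suite at the top of this excerpt are provoked on purpose: spdk_mem_map_set_translation() rejects any vaddr/len that is not 2 MB-aligned, and any vaddr outside the user-mode address range. A minimal sketch of the calls behind those messages, assuming the spdk/env.h signatures; the notify callback and translation value are illustrative, not the test's actual code:

```c
#include "spdk/env.h"
#include <stddef.h>

/* Illustrative pass-through notify callback so the map can be created;
 * if this returned non-zero, spdk_mem_map_alloc() would fail with the
 * "Initial mem_map notify failed" error seen above. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	return 0;
}

static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

void
translation_sketch(void)
{
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

	/* Valid: vaddr and len are both 2 MB-aligned. */
	spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x12345);

	/* Invalid: len=1234 is not 2 MB-aligned; this is the
	 * "invalid ... parameters, vaddr=2097152 len=1234" error above. */
	spdk_mem_map_set_translation(map, 0x200000, 1234, 0x12345);

	spdk_mem_map_free(&map);
}
```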
00:25:56.979 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:56.979 EAL: Restoring previous memory policy: 4 00:25:56.979 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.979 EAL: request: mp_malloc_sync 00:25:56.979 EAL: No shared files mode enabled, IPC is disabled 00:25:56.979 EAL: Heap on socket 0 was expanded by 66MB 00:25:56.979 EAL: Calling mem event callback 'spdk:(nil)' 00:25:56.979 EAL: request: mp_malloc_sync 00:25:56.979 EAL: No shared files mode enabled, IPC is disabled 00:25:56.979 EAL: Heap on socket 0 was shrunk by 66MB 00:25:57.244 EAL: Trying to obtain current memory policy. 00:25:57.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:57.244 EAL: Restoring previous memory policy: 4 00:25:57.244 EAL: Calling mem event callback 'spdk:(nil)' 00:25:57.244 EAL: request: mp_malloc_sync 00:25:57.244 EAL: No shared files mode enabled, IPC is disabled 00:25:57.244 EAL: Heap on socket 0 was expanded by 130MB 00:25:57.513 EAL: Calling mem event callback 'spdk:(nil)' 00:25:57.513 EAL: request: mp_malloc_sync 00:25:57.513 EAL: No shared files mode enabled, IPC is disabled 00:25:57.513 EAL: Heap on socket 0 was shrunk by 130MB 00:25:57.771 EAL: Trying to obtain current memory policy. 00:25:57.771 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:57.771 EAL: Restoring previous memory policy: 4 00:25:57.771 EAL: Calling mem event callback 'spdk:(nil)' 00:25:57.771 EAL: request: mp_malloc_sync 00:25:57.771 EAL: No shared files mode enabled, IPC is disabled 00:25:57.771 EAL: Heap on socket 0 was expanded by 258MB 00:25:58.338 EAL: Calling mem event callback 'spdk:(nil)' 00:25:58.338 EAL: request: mp_malloc_sync 00:25:58.338 EAL: No shared files mode enabled, IPC is disabled 00:25:58.338 EAL: Heap on socket 0 was shrunk by 258MB 00:25:58.904 EAL: Trying to obtain current memory policy. 00:25:58.904 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:58.904 EAL: Restoring previous memory policy: 4 00:25:58.904 EAL: Calling mem event callback 'spdk:(nil)' 00:25:58.904 EAL: request: mp_malloc_sync 00:25:58.904 EAL: No shared files mode enabled, IPC is disabled 00:25:58.904 EAL: Heap on socket 0 was expanded by 514MB 00:25:59.838 EAL: Calling mem event callback 'spdk:(nil)' 00:26:00.096 EAL: request: mp_malloc_sync 00:26:00.096 EAL: No shared files mode enabled, IPC is disabled 00:26:00.096 EAL: Heap on socket 0 was shrunk by 514MB 00:26:01.034 EAL: Trying to obtain current memory policy. 
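[Editor's note] The expand/shrink ladder above (2 MB, 4 MB, 6 MB, ... up to 1026 MB) is vtophys_spdk_malloc_test allocating progressively larger DMA-safe buffers: each spdk_dma_malloc() triggers the registered 'spdk:' mem event callback as the DPDK heap grows, and each free lets it shrink again. One rung of that ladder, sketched under the assumption of the spdk/env.h signatures (size and alignment here are illustrative):

```c
#include "spdk/env.h"
#include <assert.h>

/* One rung: allocate, translate, free. */
static void
ladder_rung(size_t size)
{
	uint64_t contig_len = size;

	/* Grows the heap on demand: "Heap on socket 0 was expanded by N MB". */
	void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MB align */, NULL);
	assert(buf != NULL);

	/* In IOVA mode 'PA' (selected above), every buffer must translate. */
	assert(spdk_vtophys(buf, &contig_len) != SPDK_VTOPHYS_ERROR);

	/* Freeing allows the shrink path: "Heap on socket 0 was shrunk by N MB". */
	spdk_dma_free(buf);
}
```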
00:26:01.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:01.293 EAL: Restoring previous memory policy: 4 00:26:01.293 EAL: Calling mem event callback 'spdk:(nil)' 00:26:01.293 EAL: request: mp_malloc_sync 00:26:01.293 EAL: No shared files mode enabled, IPC is disabled 00:26:01.293 EAL: Heap on socket 0 was expanded by 1026MB 00:26:03.200 EAL: Calling mem event callback 'spdk:(nil)' 00:26:03.461 EAL: request: mp_malloc_sync 00:26:03.461 EAL: No shared files mode enabled, IPC is disabled 00:26:03.461 EAL: Heap on socket 0 was shrunk by 1026MB 00:26:05.370 passed 00:26:05.370 00:26:05.370 Run Summary: Type Total Ran Passed Failed Inactive 00:26:05.370 suites 1 1 n/a 0 0 00:26:05.370 tests 2 2 2 0 0 00:26:05.370 asserts 5740 5740 5740 0 n/a 00:26:05.370 00:26:05.370 Elapsed time = 9.069 seconds 00:26:05.370 EAL: Calling mem event callback 'spdk:(nil)' 00:26:05.370 EAL: request: mp_malloc_sync 00:26:05.370 EAL: No shared files mode enabled, IPC is disabled 00:26:05.370 EAL: Heap on socket 0 was shrunk by 2MB 00:26:05.370 EAL: No shared files mode enabled, IPC is disabled 00:26:05.370 EAL: No shared files mode enabled, IPC is disabled 00:26:05.370 EAL: No shared files mode enabled, IPC is disabled 00:26:05.370 00:26:05.370 real 0m9.421s 00:26:05.370 user 0m8.355s 00:26:05.370 sys 0m0.903s 00:26:05.370 17:22:42 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.370 17:22:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:26:05.370 ************************************ 00:26:05.370 END TEST env_vtophys 00:26:05.370 ************************************ 00:26:05.633 17:22:42 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:26:05.633 17:22:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:05.633 17:22:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.633 17:22:42 env -- common/autotest_common.sh@10 -- # set +x 00:26:05.633 ************************************ 00:26:05.633 START TEST env_pci 00:26:05.633 ************************************ 00:26:05.633 17:22:42 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:26:05.633 00:26:05.633 00:26:05.633 CUnit - A unit testing framework for C - Version 2.1-3 00:26:05.633 http://cunit.sourceforge.net/ 00:26:05.633 00:26:05.633 00:26:05.633 Suite: pci 00:26:05.633 Test: pci_hook ...[2024-11-26 17:22:42.890880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57905 has claimed it 00:26:05.633 passed 00:26:05.633 00:26:05.633 Run Summary: Type Total Ran Passed Failed Inactive 00:26:05.633 suites 1 1 n/a 0 0 00:26:05.633 tests 1 1 1 0 0 00:26:05.633 asserts 25 25 25 0 n/a 00:26:05.633 00:26:05.633 Elapsed time = 0.011 seconds 00:26:05.633 EAL: Cannot find device (10000:00:01.0) 00:26:05.633 EAL: Failed to attach device on primary process 00:26:05.633 00:26:05.633 real 0m0.098s 00:26:05.633 user 0m0.038s 00:26:05.633 sys 0m0.059s 00:26:05.633 17:22:42 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.633 17:22:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:26:05.633 ************************************ 00:26:05.633 END TEST env_pci 00:26:05.633 ************************************ 00:26:05.633 17:22:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:26:05.633 17:22:42 env -- env/env.sh@15 -- # uname 00:26:05.633 17:22:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:26:05.633 17:22:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:26:05.633 17:22:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:26:05.633 17:22:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:05.633 17:22:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.633 17:22:43 env -- common/autotest_common.sh@10 -- # set +x 00:26:05.633 ************************************ 00:26:05.633 START TEST env_dpdk_post_init 00:26:05.633 ************************************ 00:26:05.633 17:22:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:26:05.892 EAL: Detected CPU lcores: 10 00:26:05.892 EAL: Detected NUMA nodes: 1 00:26:05.892 EAL: Detected shared linkage of DPDK 00:26:05.892 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:26:05.892 EAL: Selected IOVA mode 'PA' 00:26:05.892 TELEMETRY: No legacy callbacks, legacy socket not created 00:26:05.892 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:26:05.892 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:26:05.892 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:26:05.892 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:26:05.892 Starting DPDK initialization... 00:26:05.892 Starting SPDK post initialization... 00:26:05.892 SPDK NVMe probe 00:26:05.892 Attaching to 0000:00:10.0 00:26:05.892 Attaching to 0000:00:11.0 00:26:05.892 Attaching to 0000:00:12.0 00:26:05.892 Attaching to 0000:00:13.0 00:26:05.892 Attached to 0000:00:10.0 00:26:05.892 Attached to 0000:00:11.0 00:26:05.892 Attached to 0000:00:13.0 00:26:05.892 Attached to 0000:00:12.0 00:26:05.892 Cleaning up... 
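[Editor's note] env_dpdk_post_init above runs with `-c 0x1 --base-virtaddr=0x200000000000`; those flags map onto spdk_env_opts before spdk_env_init() brings up DPDK, after which the PCI probe attaches the four emulated NVMe controllers (1b36:0010). A hedged sketch of that bring-up, assuming the spdk_env_opts fields from spdk/env.h; the application name is made up for illustration:

```c
#include "spdk/env.h"
#include <stdio.h>

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_sketch";         /* illustrative name */
	opts.core_mask = "0x1";                 /* mirrors -c 0x1 */
	opts.base_virtaddr = 0x200000000000ULL; /* mirrors --base-virtaddr */

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "Unable to initialize SPDK env\n");
		return 1;
	}

	/* ... spdk_nvme_probe() / controller attach would follow here ... */

	spdk_env_fini();
	return 0;
}
```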
00:26:05.892 00:26:05.892 real 0m0.314s 00:26:05.892 user 0m0.110s 00:26:05.892 sys 0m0.104s 00:26:05.892 17:22:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.892 ************************************ 00:26:05.892 END TEST env_dpdk_post_init 00:26:05.892 ************************************ 00:26:05.892 17:22:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.151 17:22:43 env -- env/env.sh@26 -- # uname 00:26:06.151 17:22:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:26:06.151 17:22:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:26:06.151 17:22:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:06.151 17:22:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.151 17:22:43 env -- common/autotest_common.sh@10 -- # set +x 00:26:06.151 ************************************ 00:26:06.151 START TEST env_mem_callbacks 00:26:06.151 ************************************ 00:26:06.151 17:22:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:26:06.151 EAL: Detected CPU lcores: 10 00:26:06.151 EAL: Detected NUMA nodes: 1 00:26:06.151 EAL: Detected shared linkage of DPDK 00:26:06.151 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:26:06.151 EAL: Selected IOVA mode 'PA' 00:26:06.151 TELEMETRY: No legacy callbacks, legacy socket not created 00:26:06.151 00:26:06.151 00:26:06.151 CUnit - A unit testing framework for C - Version 2.1-3 00:26:06.151 http://cunit.sourceforge.net/ 00:26:06.151 00:26:06.151 00:26:06.151 Suite: memory 00:26:06.151 Test: test ... 00:26:06.151 register 0x200000200000 2097152 00:26:06.151 malloc 3145728 00:26:06.151 register 0x200000400000 4194304 00:26:06.410 buf 0x2000004fffc0 len 3145728 PASSED 00:26:06.410 malloc 64 00:26:06.410 buf 0x2000004ffec0 len 64 PASSED 00:26:06.410 malloc 4194304 00:26:06.410 register 0x200000800000 6291456 00:26:06.410 buf 0x2000009fffc0 len 4194304 PASSED 00:26:06.410 free 0x2000004fffc0 3145728 00:26:06.410 free 0x2000004ffec0 64 00:26:06.410 unregister 0x200000400000 4194304 PASSED 00:26:06.410 free 0x2000009fffc0 4194304 00:26:06.410 unregister 0x200000800000 6291456 PASSED 00:26:06.410 malloc 8388608 00:26:06.410 register 0x200000400000 10485760 00:26:06.410 buf 0x2000005fffc0 len 8388608 PASSED 00:26:06.410 free 0x2000005fffc0 8388608 00:26:06.410 unregister 0x200000400000 10485760 PASSED 00:26:06.410 passed 00:26:06.410 00:26:06.410 Run Summary: Type Total Ran Passed Failed Inactive 00:26:06.410 suites 1 1 n/a 0 0 00:26:06.410 tests 1 1 1 0 0 00:26:06.410 asserts 15 15 15 0 n/a 00:26:06.410 00:26:06.410 Elapsed time = 0.090 seconds 00:26:06.410 00:26:06.410 real 0m0.301s 00:26:06.410 user 0m0.115s 00:26:06.410 sys 0m0.084s 00:26:06.410 17:22:43 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.410 ************************************ 00:26:06.410 17:22:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:26:06.410 END TEST env_mem_callbacks 00:26:06.410 ************************************ 00:26:06.410 00:26:06.410 real 0m11.051s 00:26:06.410 user 0m9.141s 00:26:06.410 sys 0m1.544s 00:26:06.410 17:22:43 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.410 17:22:43 env -- common/autotest_common.sh@10 -- # set +x 00:26:06.410 ************************************ 00:26:06.410 END TEST env 00:26:06.410 
************************************ 00:26:06.410 17:22:43 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:26:06.410 17:22:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:06.410 17:22:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.410 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:06.410 ************************************ 00:26:06.410 START TEST rpc 00:26:06.410 ************************************ 00:26:06.410 17:22:43 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:26:06.669 * Looking for test storage... 00:26:06.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:26:06.669 17:22:43 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:06.669 17:22:43 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:06.669 17:22:43 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:06.669 17:22:43 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.669 17:22:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.669 17:22:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.669 17:22:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.669 17:22:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.669 17:22:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.669 17:22:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:06.669 17:22:43 rpc -- scripts/common.sh@345 -- # : 1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.669 17:22:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:06.669 17:22:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@353 -- # local d=1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.669 17:22:43 rpc -- scripts/common.sh@355 -- # echo 1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.669 17:22:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@353 -- # local d=2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.669 17:22:43 rpc -- scripts/common.sh@355 -- # echo 2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.669 17:22:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.669 17:22:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.670 17:22:43 rpc -- scripts/common.sh@368 -- # return 0 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.670 --rc genhtml_branch_coverage=1 00:26:06.670 --rc genhtml_function_coverage=1 00:26:06.670 --rc genhtml_legend=1 00:26:06.670 --rc geninfo_all_blocks=1 00:26:06.670 --rc geninfo_unexecuted_blocks=1 00:26:06.670 00:26:06.670 ' 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.670 --rc genhtml_branch_coverage=1 00:26:06.670 --rc genhtml_function_coverage=1 00:26:06.670 --rc genhtml_legend=1 00:26:06.670 --rc geninfo_all_blocks=1 00:26:06.670 --rc geninfo_unexecuted_blocks=1 00:26:06.670 00:26:06.670 ' 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.670 --rc genhtml_branch_coverage=1 00:26:06.670 --rc genhtml_function_coverage=1 00:26:06.670 --rc genhtml_legend=1 00:26:06.670 --rc geninfo_all_blocks=1 00:26:06.670 --rc geninfo_unexecuted_blocks=1 00:26:06.670 00:26:06.670 ' 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.670 --rc genhtml_branch_coverage=1 00:26:06.670 --rc genhtml_function_coverage=1 00:26:06.670 --rc genhtml_legend=1 00:26:06.670 --rc geninfo_all_blocks=1 00:26:06.670 --rc geninfo_unexecuted_blocks=1 00:26:06.670 00:26:06.670 ' 00:26:06.670 17:22:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58032 00:26:06.670 17:22:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:06.670 17:22:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58032 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 58032 ']' 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
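[Editor's note] One more env sketch before the rpc output continues: the register/malloc/unregister trace printed by env_mem_callbacks a little further up ("register 0x200000200000 2097152 ... unregister ... PASSED") corresponds to the spdk_mem_register()/spdk_mem_unregister() pair, which both insist on 2 MB-aligned vaddr and len (hence the earlier "invalid spdk_mem_register parameters" errors for len=1234 and vaddr=4d2). A sketch under the spdk/env.h signatures; the mmap backing and manual alignment are illustrative:

```c
#include "spdk/env.h"
#include <stdint.h>
#include <sys/mman.h>
#include <assert.h>

#define REGION_SZ (2 * 1024 * 1024) /* 2 MB, the registration granularity */

static void
register_sketch(void)
{
	/* mmap guarantees page alignment only, so reserve 2x and align up. */
	size_t span = 2 * REGION_SZ;
	uint8_t *base = mmap(NULL, span, PROT_READ | PROT_WRITE,
			     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	assert(base != MAP_FAILED);
	uint8_t *buf = (uint8_t *)(((uintptr_t)base + REGION_SZ - 1) &
				   ~(uintptr_t)(REGION_SZ - 1));

	/* Fires the "register <vaddr> <len>" callback line; may still fail at
	 * runtime if the env cannot translate this region (the unit test uses
	 * addresses inside SPDK's reserved VA range instead). */
	int rc = spdk_mem_register(buf, REGION_SZ);
	if (rc == 0) {
		/* ... DMA-capable use of buf ... */
		spdk_mem_unregister(buf, REGION_SZ); /* "unregister <vaddr> <len>" */
	}
	munmap(base, span);
}
```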
00:26:06.670 17:22:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.670 17:22:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.670 [2024-11-26 17:22:44.101992] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:06.670 [2024-11-26 17:22:44.102129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58032 ] 00:26:06.929 [2024-11-26 17:22:44.288665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.188 [2024-11-26 17:22:44.452022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:26:07.188 [2024-11-26 17:22:44.452090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58032' to capture a snapshot of events at runtime. 00:26:07.188 [2024-11-26 17:22:44.452102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.188 [2024-11-26 17:22:44.452113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.188 [2024-11-26 17:22:44.452122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58032 for offline analysis/debug. 00:26:07.188 [2024-11-26 17:22:44.453513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.136 17:22:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.136 17:22:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:26:08.136 17:22:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:26:08.136 17:22:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:26:08.136 17:22:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:26:08.136 17:22:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:26:08.136 17:22:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:08.137 17:22:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.137 17:22:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.137 ************************************ 00:26:08.137 START TEST rpc_integrity 00:26:08.137 ************************************ 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.137 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:26:08.137 { 00:26:08.137 "name": "Malloc0", 00:26:08.137 "aliases": [ 00:26:08.137 "b3b73e1b-7812-4b15-8c45-83974f647086" 00:26:08.137 ], 00:26:08.137 "product_name": "Malloc disk", 00:26:08.137 "block_size": 512, 00:26:08.137 "num_blocks": 16384, 00:26:08.137 "uuid": "b3b73e1b-7812-4b15-8c45-83974f647086", 00:26:08.137 "assigned_rate_limits": { 00:26:08.137 "rw_ios_per_sec": 0, 00:26:08.137 "rw_mbytes_per_sec": 0, 00:26:08.137 "r_mbytes_per_sec": 0, 00:26:08.137 "w_mbytes_per_sec": 0 00:26:08.137 }, 00:26:08.137 "claimed": false, 00:26:08.137 "zoned": false, 00:26:08.137 "supported_io_types": { 00:26:08.137 "read": true, 00:26:08.137 "write": true, 00:26:08.137 "unmap": true, 00:26:08.137 "flush": true, 00:26:08.137 "reset": true, 00:26:08.137 "nvme_admin": false, 00:26:08.137 "nvme_io": false, 00:26:08.137 "nvme_io_md": false, 00:26:08.137 "write_zeroes": true, 00:26:08.137 "zcopy": true, 00:26:08.137 "get_zone_info": false, 00:26:08.137 "zone_management": false, 00:26:08.137 "zone_append": false, 00:26:08.137 "compare": false, 00:26:08.137 "compare_and_write": false, 00:26:08.137 "abort": true, 00:26:08.137 "seek_hole": false, 00:26:08.137 "seek_data": false, 00:26:08.137 "copy": true, 00:26:08.137 "nvme_iov_md": false 00:26:08.137 }, 00:26:08.137 "memory_domains": [ 00:26:08.137 { 00:26:08.137 "dma_device_id": "system", 00:26:08.137 "dma_device_type": 1 00:26:08.137 }, 00:26:08.137 { 00:26:08.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.137 "dma_device_type": 2 00:26:08.137 } 00:26:08.137 ], 00:26:08.137 "driver_specific": {} 00:26:08.137 } 00:26:08.137 ]' 00:26:08.137 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.396 [2024-11-26 17:22:45.615767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:26:08.396 [2024-11-26 17:22:45.615855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.396 [2024-11-26 17:22:45.615907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:08.396 [2024-11-26 17:22:45.615921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.396 [2024-11-26 17:22:45.618480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.396 [2024-11-26 17:22:45.618534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:26:08.396 
Passthru0 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:26:08.396 { 00:26:08.396 "name": "Malloc0", 00:26:08.396 "aliases": [ 00:26:08.396 "b3b73e1b-7812-4b15-8c45-83974f647086" 00:26:08.396 ], 00:26:08.396 "product_name": "Malloc disk", 00:26:08.396 "block_size": 512, 00:26:08.396 "num_blocks": 16384, 00:26:08.396 "uuid": "b3b73e1b-7812-4b15-8c45-83974f647086", 00:26:08.396 "assigned_rate_limits": { 00:26:08.396 "rw_ios_per_sec": 0, 00:26:08.396 "rw_mbytes_per_sec": 0, 00:26:08.396 "r_mbytes_per_sec": 0, 00:26:08.396 "w_mbytes_per_sec": 0 00:26:08.396 }, 00:26:08.396 "claimed": true, 00:26:08.396 "claim_type": "exclusive_write", 00:26:08.396 "zoned": false, 00:26:08.396 "supported_io_types": { 00:26:08.396 "read": true, 00:26:08.396 "write": true, 00:26:08.396 "unmap": true, 00:26:08.396 "flush": true, 00:26:08.396 "reset": true, 00:26:08.396 "nvme_admin": false, 00:26:08.396 "nvme_io": false, 00:26:08.396 "nvme_io_md": false, 00:26:08.396 "write_zeroes": true, 00:26:08.396 "zcopy": true, 00:26:08.396 "get_zone_info": false, 00:26:08.396 "zone_management": false, 00:26:08.396 "zone_append": false, 00:26:08.396 "compare": false, 00:26:08.396 "compare_and_write": false, 00:26:08.396 "abort": true, 00:26:08.396 "seek_hole": false, 00:26:08.396 "seek_data": false, 00:26:08.396 "copy": true, 00:26:08.396 "nvme_iov_md": false 00:26:08.396 }, 00:26:08.396 "memory_domains": [ 00:26:08.396 { 00:26:08.396 "dma_device_id": "system", 00:26:08.396 "dma_device_type": 1 00:26:08.396 }, 00:26:08.396 { 00:26:08.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.396 "dma_device_type": 2 00:26:08.396 } 00:26:08.396 ], 00:26:08.396 "driver_specific": {} 00:26:08.396 }, 00:26:08.396 { 00:26:08.396 "name": "Passthru0", 00:26:08.396 "aliases": [ 00:26:08.396 "7c8198e1-a87e-5260-ba1b-1432e84f2fe6" 00:26:08.396 ], 00:26:08.396 "product_name": "passthru", 00:26:08.396 "block_size": 512, 00:26:08.396 "num_blocks": 16384, 00:26:08.396 "uuid": "7c8198e1-a87e-5260-ba1b-1432e84f2fe6", 00:26:08.396 "assigned_rate_limits": { 00:26:08.396 "rw_ios_per_sec": 0, 00:26:08.396 "rw_mbytes_per_sec": 0, 00:26:08.396 "r_mbytes_per_sec": 0, 00:26:08.396 "w_mbytes_per_sec": 0 00:26:08.396 }, 00:26:08.396 "claimed": false, 00:26:08.396 "zoned": false, 00:26:08.396 "supported_io_types": { 00:26:08.396 "read": true, 00:26:08.396 "write": true, 00:26:08.396 "unmap": true, 00:26:08.396 "flush": true, 00:26:08.396 "reset": true, 00:26:08.396 "nvme_admin": false, 00:26:08.396 "nvme_io": false, 00:26:08.396 "nvme_io_md": false, 00:26:08.396 "write_zeroes": true, 00:26:08.396 "zcopy": true, 00:26:08.396 "get_zone_info": false, 00:26:08.396 "zone_management": false, 00:26:08.396 "zone_append": false, 00:26:08.396 "compare": false, 00:26:08.396 "compare_and_write": false, 00:26:08.396 "abort": true, 00:26:08.396 "seek_hole": false, 00:26:08.396 "seek_data": false, 00:26:08.396 "copy": true, 00:26:08.396 "nvme_iov_md": false 00:26:08.396 }, 00:26:08.396 "memory_domains": [ 00:26:08.396 { 00:26:08.396 "dma_device_id": "system", 00:26:08.396 "dma_device_type": 1 00:26:08.396 }, 
00:26:08.396 { 00:26:08.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.396 "dma_device_type": 2 00:26:08.396 } 00:26:08.396 ], 00:26:08.396 "driver_specific": { 00:26:08.396 "passthru": { 00:26:08.396 "name": "Passthru0", 00:26:08.396 "base_bdev_name": "Malloc0" 00:26:08.396 } 00:26:08.396 } 00:26:08.396 } 00:26:08.396 ]' 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:26:08.396 17:22:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:26:08.396 00:26:08.396 real 0m0.387s 00:26:08.396 user 0m0.201s 00:26:08.396 sys 0m0.053s 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.396 17:22:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.396 ************************************ 00:26:08.396 END TEST rpc_integrity 00:26:08.396 ************************************ 00:26:08.656 17:22:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:26:08.656 17:22:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:08.656 17:22:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.656 17:22:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.656 ************************************ 00:26:08.656 START TEST rpc_plugins 00:26:08.656 ************************************ 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:26:08.656 17:22:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.656 17:22:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:26:08.656 17:22:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.656 17:22:45 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:26:08.656 { 00:26:08.656 "name": "Malloc1", 00:26:08.656 "aliases": [ 00:26:08.656 "357a5ab1-3116-4f1a-b369-988f8cb9d903" 00:26:08.656 ], 00:26:08.656 "product_name": "Malloc disk", 00:26:08.656 "block_size": 4096, 00:26:08.656 "num_blocks": 256, 00:26:08.656 "uuid": "357a5ab1-3116-4f1a-b369-988f8cb9d903", 00:26:08.656 "assigned_rate_limits": { 00:26:08.656 "rw_ios_per_sec": 0, 00:26:08.656 "rw_mbytes_per_sec": 0, 00:26:08.656 "r_mbytes_per_sec": 0, 00:26:08.656 "w_mbytes_per_sec": 0 00:26:08.656 }, 00:26:08.656 "claimed": false, 00:26:08.656 "zoned": false, 00:26:08.656 "supported_io_types": { 00:26:08.656 "read": true, 00:26:08.656 "write": true, 00:26:08.656 "unmap": true, 00:26:08.656 "flush": true, 00:26:08.656 "reset": true, 00:26:08.656 "nvme_admin": false, 00:26:08.656 "nvme_io": false, 00:26:08.656 "nvme_io_md": false, 00:26:08.656 "write_zeroes": true, 00:26:08.656 "zcopy": true, 00:26:08.656 "get_zone_info": false, 00:26:08.656 "zone_management": false, 00:26:08.656 "zone_append": false, 00:26:08.656 "compare": false, 00:26:08.656 "compare_and_write": false, 00:26:08.656 "abort": true, 00:26:08.656 "seek_hole": false, 00:26:08.656 "seek_data": false, 00:26:08.656 "copy": true, 00:26:08.656 "nvme_iov_md": false 00:26:08.656 }, 00:26:08.656 "memory_domains": [ 00:26:08.656 { 00:26:08.656 "dma_device_id": "system", 00:26:08.656 "dma_device_type": 1 00:26:08.656 }, 00:26:08.656 { 00:26:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.656 "dma_device_type": 2 00:26:08.656 } 00:26:08.656 ], 00:26:08.656 "driver_specific": {} 00:26:08.656 } 00:26:08.656 ]' 00:26:08.656 17:22:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:26:08.656 17:22:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:26:08.656 17:22:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.656 17:22:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.656 17:22:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.656 17:22:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:26:08.656 17:22:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.656 17:22:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.656 17:22:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.656 17:22:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:26:08.656 17:22:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:26:08.656 17:22:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:26:08.656 00:26:08.656 real 0m0.201s 00:26:08.656 user 0m0.116s 00:26:08.656 sys 0m0.031s 00:26:08.656 17:22:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.656 17:22:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.656 ************************************ 00:26:08.656 END TEST rpc_plugins 00:26:08.656 ************************************ 00:26:08.915 17:22:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:26:08.915 17:22:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:08.915 17:22:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.915 17:22:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.915 ************************************ 00:26:08.915 START TEST rpc_trace_cmd_test 
00:26:08.915 ************************************ 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:26:08.915 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58032", 00:26:08.915 "tpoint_group_mask": "0x8", 00:26:08.915 "iscsi_conn": { 00:26:08.915 "mask": "0x2", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "scsi": { 00:26:08.915 "mask": "0x4", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "bdev": { 00:26:08.915 "mask": "0x8", 00:26:08.915 "tpoint_mask": "0xffffffffffffffff" 00:26:08.915 }, 00:26:08.915 "nvmf_rdma": { 00:26:08.915 "mask": "0x10", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "nvmf_tcp": { 00:26:08.915 "mask": "0x20", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "ftl": { 00:26:08.915 "mask": "0x40", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "blobfs": { 00:26:08.915 "mask": "0x80", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "dsa": { 00:26:08.915 "mask": "0x200", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "thread": { 00:26:08.915 "mask": "0x400", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "nvme_pcie": { 00:26:08.915 "mask": "0x800", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "iaa": { 00:26:08.915 "mask": "0x1000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "nvme_tcp": { 00:26:08.915 "mask": "0x2000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "bdev_nvme": { 00:26:08.915 "mask": "0x4000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "sock": { 00:26:08.915 "mask": "0x8000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "blob": { 00:26:08.915 "mask": "0x10000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "bdev_raid": { 00:26:08.915 "mask": "0x20000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 }, 00:26:08.915 "scheduler": { 00:26:08.915 "mask": "0x40000", 00:26:08.915 "tpoint_mask": "0x0" 00:26:08.915 } 00:26:08.915 }' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:26:08.915 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:26:09.176 17:22:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:26:09.176 00:26:09.176 real 0m0.256s 00:26:09.176 
user 0m0.202s 00:26:09.176 sys 0m0.042s 00:26:09.176 17:22:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.176 17:22:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.176 ************************************ 00:26:09.176 END TEST rpc_trace_cmd_test 00:26:09.176 ************************************ 00:26:09.176 17:22:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:26:09.176 17:22:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:26:09.176 17:22:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:26:09.176 17:22:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.176 17:22:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.176 17:22:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:09.176 ************************************ 00:26:09.176 START TEST rpc_daemon_integrity 00:26:09.176 ************************************ 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.176 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:26:09.176 { 00:26:09.176 "name": "Malloc2", 00:26:09.176 "aliases": [ 00:26:09.176 "9b576e10-ef37-4e13-9637-21521f63baf7" 00:26:09.176 ], 00:26:09.176 "product_name": "Malloc disk", 00:26:09.176 "block_size": 512, 00:26:09.176 "num_blocks": 16384, 00:26:09.176 "uuid": "9b576e10-ef37-4e13-9637-21521f63baf7", 00:26:09.176 "assigned_rate_limits": { 00:26:09.176 "rw_ios_per_sec": 0, 00:26:09.176 "rw_mbytes_per_sec": 0, 00:26:09.176 "r_mbytes_per_sec": 0, 00:26:09.176 "w_mbytes_per_sec": 0 00:26:09.176 }, 00:26:09.176 "claimed": false, 00:26:09.176 "zoned": false, 00:26:09.176 "supported_io_types": { 00:26:09.176 "read": true, 00:26:09.176 "write": true, 00:26:09.176 "unmap": true, 00:26:09.176 "flush": true, 00:26:09.176 "reset": true, 00:26:09.176 "nvme_admin": false, 00:26:09.176 "nvme_io": false, 00:26:09.176 "nvme_io_md": false, 00:26:09.176 "write_zeroes": true, 00:26:09.176 "zcopy": true, 00:26:09.176 "get_zone_info": 
false, 00:26:09.176 "zone_management": false, 00:26:09.176 "zone_append": false, 00:26:09.176 "compare": false, 00:26:09.176 "compare_and_write": false, 00:26:09.176 "abort": true, 00:26:09.176 "seek_hole": false, 00:26:09.176 "seek_data": false, 00:26:09.176 "copy": true, 00:26:09.176 "nvme_iov_md": false 00:26:09.176 }, 00:26:09.176 "memory_domains": [ 00:26:09.176 { 00:26:09.177 "dma_device_id": "system", 00:26:09.177 "dma_device_type": 1 00:26:09.177 }, 00:26:09.177 { 00:26:09.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.177 "dma_device_type": 2 00:26:09.177 } 00:26:09.177 ], 00:26:09.177 "driver_specific": {} 00:26:09.177 } 00:26:09.177 ]' 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.177 [2024-11-26 17:22:46.607038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:26:09.177 [2024-11-26 17:22:46.607122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.177 [2024-11-26 17:22:46.607157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:09.177 [2024-11-26 17:22:46.607181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.177 [2024-11-26 17:22:46.609852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.177 [2024-11-26 17:22:46.609902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:26:09.177 Passthru0 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.177 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:26:09.437 { 00:26:09.437 "name": "Malloc2", 00:26:09.437 "aliases": [ 00:26:09.437 "9b576e10-ef37-4e13-9637-21521f63baf7" 00:26:09.437 ], 00:26:09.437 "product_name": "Malloc disk", 00:26:09.437 "block_size": 512, 00:26:09.437 "num_blocks": 16384, 00:26:09.437 "uuid": "9b576e10-ef37-4e13-9637-21521f63baf7", 00:26:09.437 "assigned_rate_limits": { 00:26:09.437 "rw_ios_per_sec": 0, 00:26:09.437 "rw_mbytes_per_sec": 0, 00:26:09.437 "r_mbytes_per_sec": 0, 00:26:09.437 "w_mbytes_per_sec": 0 00:26:09.437 }, 00:26:09.437 "claimed": true, 00:26:09.437 "claim_type": "exclusive_write", 00:26:09.437 "zoned": false, 00:26:09.437 "supported_io_types": { 00:26:09.437 "read": true, 00:26:09.437 "write": true, 00:26:09.437 "unmap": true, 00:26:09.437 "flush": true, 00:26:09.437 "reset": true, 00:26:09.437 "nvme_admin": false, 00:26:09.437 "nvme_io": false, 00:26:09.437 "nvme_io_md": false, 00:26:09.437 "write_zeroes": true, 00:26:09.437 "zcopy": true, 00:26:09.437 "get_zone_info": false, 00:26:09.437 "zone_management": false, 00:26:09.437 "zone_append": false, 00:26:09.437 "compare": false, 
00:26:09.437 "compare_and_write": false, 00:26:09.437 "abort": true, 00:26:09.437 "seek_hole": false, 00:26:09.437 "seek_data": false, 00:26:09.437 "copy": true, 00:26:09.437 "nvme_iov_md": false 00:26:09.437 }, 00:26:09.437 "memory_domains": [ 00:26:09.437 { 00:26:09.437 "dma_device_id": "system", 00:26:09.437 "dma_device_type": 1 00:26:09.437 }, 00:26:09.437 { 00:26:09.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.437 "dma_device_type": 2 00:26:09.437 } 00:26:09.437 ], 00:26:09.437 "driver_specific": {} 00:26:09.437 }, 00:26:09.437 { 00:26:09.437 "name": "Passthru0", 00:26:09.437 "aliases": [ 00:26:09.437 "38a15dcd-c398-5e5b-b815-4b020806efe7" 00:26:09.437 ], 00:26:09.437 "product_name": "passthru", 00:26:09.437 "block_size": 512, 00:26:09.437 "num_blocks": 16384, 00:26:09.437 "uuid": "38a15dcd-c398-5e5b-b815-4b020806efe7", 00:26:09.437 "assigned_rate_limits": { 00:26:09.437 "rw_ios_per_sec": 0, 00:26:09.437 "rw_mbytes_per_sec": 0, 00:26:09.437 "r_mbytes_per_sec": 0, 00:26:09.437 "w_mbytes_per_sec": 0 00:26:09.437 }, 00:26:09.437 "claimed": false, 00:26:09.437 "zoned": false, 00:26:09.437 "supported_io_types": { 00:26:09.437 "read": true, 00:26:09.437 "write": true, 00:26:09.437 "unmap": true, 00:26:09.437 "flush": true, 00:26:09.437 "reset": true, 00:26:09.437 "nvme_admin": false, 00:26:09.437 "nvme_io": false, 00:26:09.437 "nvme_io_md": false, 00:26:09.437 "write_zeroes": true, 00:26:09.437 "zcopy": true, 00:26:09.437 "get_zone_info": false, 00:26:09.437 "zone_management": false, 00:26:09.437 "zone_append": false, 00:26:09.437 "compare": false, 00:26:09.437 "compare_and_write": false, 00:26:09.437 "abort": true, 00:26:09.437 "seek_hole": false, 00:26:09.437 "seek_data": false, 00:26:09.437 "copy": true, 00:26:09.437 "nvme_iov_md": false 00:26:09.437 }, 00:26:09.437 "memory_domains": [ 00:26:09.437 { 00:26:09.437 "dma_device_id": "system", 00:26:09.437 "dma_device_type": 1 00:26:09.437 }, 00:26:09.437 { 00:26:09.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.437 "dma_device_type": 2 00:26:09.437 } 00:26:09.437 ], 00:26:09.437 "driver_specific": { 00:26:09.437 "passthru": { 00:26:09.437 "name": "Passthru0", 00:26:09.437 "base_bdev_name": "Malloc2" 00:26:09.437 } 00:26:09.437 } 00:26:09.437 } 00:26:09.437 ]' 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:26:09.437 00:26:09.437 real 0m0.350s 00:26:09.437 user 0m0.200s 00:26:09.437 sys 0m0.037s 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.437 17:22:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.437 ************************************ 00:26:09.437 END TEST rpc_daemon_integrity 00:26:09.437 ************************************ 00:26:09.437 17:22:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:09.437 17:22:46 rpc -- rpc/rpc.sh@84 -- # killprocess 58032 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 58032 ']' 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@958 -- # kill -0 58032 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@959 -- # uname 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58032 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.437 killing process with pid 58032 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58032' 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@973 -- # kill 58032 00:26:09.437 17:22:46 rpc -- common/autotest_common.sh@978 -- # wait 58032 00:26:12.733 00:26:12.733 real 0m5.699s 00:26:12.733 user 0m6.321s 00:26:12.733 sys 0m0.958s 00:26:12.733 17:22:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.733 17:22:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:12.733 ************************************ 00:26:12.733 END TEST rpc 00:26:12.733 ************************************ 00:26:12.733 17:22:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:26:12.733 17:22:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:12.733 17:22:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.733 17:22:49 -- common/autotest_common.sh@10 -- # set +x 00:26:12.733 ************************************ 00:26:12.733 START TEST skip_rpc 00:26:12.733 ************************************ 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:26:12.733 * Looking for test storage... 
00:26:12.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.733 17:22:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.733 --rc genhtml_branch_coverage=1 00:26:12.733 --rc genhtml_function_coverage=1 00:26:12.733 --rc genhtml_legend=1 00:26:12.733 --rc geninfo_all_blocks=1 00:26:12.733 --rc geninfo_unexecuted_blocks=1 00:26:12.733 00:26:12.733 ' 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.733 --rc genhtml_branch_coverage=1 00:26:12.733 --rc genhtml_function_coverage=1 00:26:12.733 --rc genhtml_legend=1 00:26:12.733 --rc geninfo_all_blocks=1 00:26:12.733 --rc geninfo_unexecuted_blocks=1 00:26:12.733 00:26:12.733 ' 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:26:12.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.733 --rc genhtml_branch_coverage=1 00:26:12.733 --rc genhtml_function_coverage=1 00:26:12.733 --rc genhtml_legend=1 00:26:12.733 --rc geninfo_all_blocks=1 00:26:12.733 --rc geninfo_unexecuted_blocks=1 00:26:12.733 00:26:12.733 ' 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.733 --rc genhtml_branch_coverage=1 00:26:12.733 --rc genhtml_function_coverage=1 00:26:12.733 --rc genhtml_legend=1 00:26:12.733 --rc geninfo_all_blocks=1 00:26:12.733 --rc geninfo_unexecuted_blocks=1 00:26:12.733 00:26:12.733 ' 00:26:12.733 17:22:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:12.733 17:22:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:12.733 17:22:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.733 17:22:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:12.733 ************************************ 00:26:12.733 START TEST skip_rpc 00:26:12.733 ************************************ 00:26:12.733 17:22:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:26:12.733 17:22:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58272 00:26:12.733 17:22:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:26:12.733 17:22:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:12.733 17:22:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:26:12.733 [2024-11-26 17:22:49.892770] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
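
The launch above starts the target with --no-rpc-server, so the RPC assertion that follows must fail for the test to pass. A condensed sketch of that flow, with names taken from the trace (spdk_tgt stands for the built build/bin/spdk_tgt binary; this is a paraphrase of the logic in test/rpc/skip_rpc.sh, not its exact text):

spdk_tgt --no-rpc-server -m 0x1 &    # RPC server disabled at startup
spdk_pid=$!
sleep 5                              # let the reactor come up
if rpc_cmd spdk_get_version; then    # any RPC call must fail now
    killprocess "$spdk_pid"
    exit 1
fi
killprocess "$spdk_pid"              # clean shutdown on success
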
00:26:12.733 [2024-11-26 17:22:49.892910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:26:12.733 [2024-11-26 17:22:50.073912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.993 [2024-11-26 17:22:50.207880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58272 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58272 ']' 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58272 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58272 00:26:18.338 killing process with pid 58272 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58272' 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58272 00:26:18.338 17:22:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58272 00:26:20.269 00:26:20.269 real 0m7.921s 00:26:20.269 user 0m7.427s 00:26:20.269 sys 0m0.397s 00:26:20.269 17:22:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.269 17:22:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:20.269 ************************************ 00:26:20.269 END TEST skip_rpc 00:26:20.269 
************************************ 00:26:20.528 17:22:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:26:20.528 17:22:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.528 17:22:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.528 17:22:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:20.528 ************************************ 00:26:20.528 START TEST skip_rpc_with_json 00:26:20.528 ************************************ 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58376 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58376 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58376 ']' 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.528 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:20.528 [2024-11-26 17:22:57.891252] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
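
waitforlisten, invoked just below, blocks until the freshly started target answers on its RPC socket. A hedged sketch of that idiom (the retry count, polling interval, and the use of scripts/rpc.py rpc_get_methods as the readiness probe are assumptions; the real helper lives in common/autotest_common.sh, and $rootdir here stands for the SPDK checkout, /home/vagrant/spdk_repo/spdk in this run):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died early
        # consider the target ready once the socket answers a trivial RPC
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1                                      # timed out
}
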
00:26:20.528 [2024-11-26 17:22:57.891380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58376 ] 00:26:20.786 [2024-11-26 17:22:58.075945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.786 [2024-11-26 17:22:58.216328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.163 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.163 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:26:22.163 17:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:26:22.163 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.163 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:22.163 [2024-11-26 17:22:59.209674] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:26:22.163 request: 00:26:22.163 { 00:26:22.163 "trtype": "tcp", 00:26:22.163 "method": "nvmf_get_transports", 00:26:22.163 "req_id": 1 00:26:22.163 } 00:26:22.163 Got JSON-RPC error response 00:26:22.163 response: 00:26:22.163 { 00:26:22.163 "code": -19, 00:26:22.164 "message": "No such device" 00:26:22.164 } 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:22.164 [2024-11-26 17:22:59.221752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.164 17:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:22.164 { 00:26:22.164 "subsystems": [ 00:26:22.164 { 00:26:22.164 "subsystem": "fsdev", 00:26:22.164 "config": [ 00:26:22.164 { 00:26:22.164 "method": "fsdev_set_opts", 00:26:22.164 "params": { 00:26:22.164 "fsdev_io_pool_size": 65535, 00:26:22.164 "fsdev_io_cache_size": 256 00:26:22.164 } 00:26:22.164 } 00:26:22.164 ] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "keyring", 00:26:22.164 "config": [] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "iobuf", 00:26:22.164 "config": [ 00:26:22.164 { 00:26:22.164 "method": "iobuf_set_options", 00:26:22.164 "params": { 00:26:22.164 "small_pool_count": 8192, 00:26:22.164 "large_pool_count": 1024, 00:26:22.164 "small_bufsize": 8192, 00:26:22.164 "large_bufsize": 135168, 00:26:22.164 "enable_numa": false 00:26:22.164 } 00:26:22.164 } 00:26:22.164 ] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "sock", 00:26:22.164 "config": [ 00:26:22.164 { 
00:26:22.164 "method": "sock_set_default_impl", 00:26:22.164 "params": { 00:26:22.164 "impl_name": "posix" 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "sock_impl_set_options", 00:26:22.164 "params": { 00:26:22.164 "impl_name": "ssl", 00:26:22.164 "recv_buf_size": 4096, 00:26:22.164 "send_buf_size": 4096, 00:26:22.164 "enable_recv_pipe": true, 00:26:22.164 "enable_quickack": false, 00:26:22.164 "enable_placement_id": 0, 00:26:22.164 "enable_zerocopy_send_server": true, 00:26:22.164 "enable_zerocopy_send_client": false, 00:26:22.164 "zerocopy_threshold": 0, 00:26:22.164 "tls_version": 0, 00:26:22.164 "enable_ktls": false 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "sock_impl_set_options", 00:26:22.164 "params": { 00:26:22.164 "impl_name": "posix", 00:26:22.164 "recv_buf_size": 2097152, 00:26:22.164 "send_buf_size": 2097152, 00:26:22.164 "enable_recv_pipe": true, 00:26:22.164 "enable_quickack": false, 00:26:22.164 "enable_placement_id": 0, 00:26:22.164 "enable_zerocopy_send_server": true, 00:26:22.164 "enable_zerocopy_send_client": false, 00:26:22.164 "zerocopy_threshold": 0, 00:26:22.164 "tls_version": 0, 00:26:22.164 "enable_ktls": false 00:26:22.164 } 00:26:22.164 } 00:26:22.164 ] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "vmd", 00:26:22.164 "config": [] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "accel", 00:26:22.164 "config": [ 00:26:22.164 { 00:26:22.164 "method": "accel_set_options", 00:26:22.164 "params": { 00:26:22.164 "small_cache_size": 128, 00:26:22.164 "large_cache_size": 16, 00:26:22.164 "task_count": 2048, 00:26:22.164 "sequence_count": 2048, 00:26:22.164 "buf_count": 2048 00:26:22.164 } 00:26:22.164 } 00:26:22.164 ] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "bdev", 00:26:22.164 "config": [ 00:26:22.164 { 00:26:22.164 "method": "bdev_set_options", 00:26:22.164 "params": { 00:26:22.164 "bdev_io_pool_size": 65535, 00:26:22.164 "bdev_io_cache_size": 256, 00:26:22.164 "bdev_auto_examine": true, 00:26:22.164 "iobuf_small_cache_size": 128, 00:26:22.164 "iobuf_large_cache_size": 16 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "bdev_raid_set_options", 00:26:22.164 "params": { 00:26:22.164 "process_window_size_kb": 1024, 00:26:22.164 "process_max_bandwidth_mb_sec": 0 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "bdev_iscsi_set_options", 00:26:22.164 "params": { 00:26:22.164 "timeout_sec": 30 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "bdev_nvme_set_options", 00:26:22.164 "params": { 00:26:22.164 "action_on_timeout": "none", 00:26:22.164 "timeout_us": 0, 00:26:22.164 "timeout_admin_us": 0, 00:26:22.164 "keep_alive_timeout_ms": 10000, 00:26:22.164 "arbitration_burst": 0, 00:26:22.164 "low_priority_weight": 0, 00:26:22.164 "medium_priority_weight": 0, 00:26:22.164 "high_priority_weight": 0, 00:26:22.164 "nvme_adminq_poll_period_us": 10000, 00:26:22.164 "nvme_ioq_poll_period_us": 0, 00:26:22.164 "io_queue_requests": 0, 00:26:22.164 "delay_cmd_submit": true, 00:26:22.164 "transport_retry_count": 4, 00:26:22.164 "bdev_retry_count": 3, 00:26:22.164 "transport_ack_timeout": 0, 00:26:22.164 "ctrlr_loss_timeout_sec": 0, 00:26:22.164 "reconnect_delay_sec": 0, 00:26:22.164 "fast_io_fail_timeout_sec": 0, 00:26:22.164 "disable_auto_failback": false, 00:26:22.164 "generate_uuids": false, 00:26:22.164 "transport_tos": 0, 00:26:22.164 "nvme_error_stat": false, 00:26:22.164 "rdma_srq_size": 0, 00:26:22.164 "io_path_stat": false, 
00:26:22.164 "allow_accel_sequence": false, 00:26:22.164 "rdma_max_cq_size": 0, 00:26:22.164 "rdma_cm_event_timeout_ms": 0, 00:26:22.164 "dhchap_digests": [ 00:26:22.164 "sha256", 00:26:22.164 "sha384", 00:26:22.164 "sha512" 00:26:22.164 ], 00:26:22.164 "dhchap_dhgroups": [ 00:26:22.164 "null", 00:26:22.164 "ffdhe2048", 00:26:22.164 "ffdhe3072", 00:26:22.164 "ffdhe4096", 00:26:22.164 "ffdhe6144", 00:26:22.164 "ffdhe8192" 00:26:22.164 ] 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "bdev_nvme_set_hotplug", 00:26:22.164 "params": { 00:26:22.164 "period_us": 100000, 00:26:22.164 "enable": false 00:26:22.164 } 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "method": "bdev_wait_for_examine" 00:26:22.164 } 00:26:22.164 ] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "scsi", 00:26:22.164 "config": null 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "scheduler", 00:26:22.164 "config": [ 00:26:22.164 { 00:26:22.164 "method": "framework_set_scheduler", 00:26:22.164 "params": { 00:26:22.164 "name": "static" 00:26:22.164 } 00:26:22.164 } 00:26:22.164 ] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "vhost_scsi", 00:26:22.164 "config": [] 00:26:22.164 }, 00:26:22.164 { 00:26:22.164 "subsystem": "vhost_blk", 00:26:22.164 "config": [] 00:26:22.164 }, 00:26:22.164 { 00:26:22.165 "subsystem": "ublk", 00:26:22.165 "config": [] 00:26:22.165 }, 00:26:22.165 { 00:26:22.165 "subsystem": "nbd", 00:26:22.165 "config": [] 00:26:22.165 }, 00:26:22.165 { 00:26:22.165 "subsystem": "nvmf", 00:26:22.165 "config": [ 00:26:22.165 { 00:26:22.165 "method": "nvmf_set_config", 00:26:22.165 "params": { 00:26:22.165 "discovery_filter": "match_any", 00:26:22.165 "admin_cmd_passthru": { 00:26:22.165 "identify_ctrlr": false 00:26:22.165 }, 00:26:22.165 "dhchap_digests": [ 00:26:22.165 "sha256", 00:26:22.165 "sha384", 00:26:22.165 "sha512" 00:26:22.165 ], 00:26:22.165 "dhchap_dhgroups": [ 00:26:22.165 "null", 00:26:22.165 "ffdhe2048", 00:26:22.165 "ffdhe3072", 00:26:22.165 "ffdhe4096", 00:26:22.165 "ffdhe6144", 00:26:22.165 "ffdhe8192" 00:26:22.165 ] 00:26:22.165 } 00:26:22.165 }, 00:26:22.165 { 00:26:22.165 "method": "nvmf_set_max_subsystems", 00:26:22.165 "params": { 00:26:22.165 "max_subsystems": 1024 00:26:22.165 } 00:26:22.165 }, 00:26:22.165 { 00:26:22.165 "method": "nvmf_set_crdt", 00:26:22.165 "params": { 00:26:22.165 "crdt1": 0, 00:26:22.165 "crdt2": 0, 00:26:22.165 "crdt3": 0 00:26:22.165 } 00:26:22.165 }, 00:26:22.165 { 00:26:22.165 "method": "nvmf_create_transport", 00:26:22.165 "params": { 00:26:22.165 "trtype": "TCP", 00:26:22.165 "max_queue_depth": 128, 00:26:22.165 "max_io_qpairs_per_ctrlr": 127, 00:26:22.165 "in_capsule_data_size": 4096, 00:26:22.165 "max_io_size": 131072, 00:26:22.165 "io_unit_size": 131072, 00:26:22.165 "max_aq_depth": 128, 00:26:22.165 "num_shared_buffers": 511, 00:26:22.165 "buf_cache_size": 4294967295, 00:26:22.165 "dif_insert_or_strip": false, 00:26:22.165 "zcopy": false, 00:26:22.165 "c2h_success": true, 00:26:22.165 "sock_priority": 0, 00:26:22.165 "abort_timeout_sec": 1, 00:26:22.165 "ack_timeout": 0, 00:26:22.165 "data_wr_pool_size": 0 00:26:22.165 } 00:26:22.165 } 00:26:22.165 ] 00:26:22.165 }, 00:26:22.165 { 00:26:22.165 "subsystem": "iscsi", 00:26:22.165 "config": [ 00:26:22.165 { 00:26:22.165 "method": "iscsi_set_options", 00:26:22.165 "params": { 00:26:22.165 "node_base": "iqn.2016-06.io.spdk", 00:26:22.165 "max_sessions": 128, 00:26:22.165 "max_connections_per_session": 2, 00:26:22.165 "max_queue_depth": 64, 00:26:22.165 
"default_time2wait": 2, 00:26:22.165 "default_time2retain": 20, 00:26:22.165 "first_burst_length": 8192, 00:26:22.165 "immediate_data": true, 00:26:22.165 "allow_duplicated_isid": false, 00:26:22.165 "error_recovery_level": 0, 00:26:22.165 "nop_timeout": 60, 00:26:22.165 "nop_in_interval": 30, 00:26:22.165 "disable_chap": false, 00:26:22.165 "require_chap": false, 00:26:22.165 "mutual_chap": false, 00:26:22.165 "chap_group": 0, 00:26:22.165 "max_large_datain_per_connection": 64, 00:26:22.165 "max_r2t_per_connection": 4, 00:26:22.165 "pdu_pool_size": 36864, 00:26:22.165 "immediate_data_pool_size": 16384, 00:26:22.165 "data_out_pool_size": 2048 00:26:22.165 } 00:26:22.165 } 00:26:22.165 ] 00:26:22.165 } 00:26:22.165 ] 00:26:22.165 } 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58376 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58376 ']' 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58376 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58376 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.165 killing process with pid 58376 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58376' 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58376 00:26:22.165 17:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58376 00:26:24.711 17:23:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58438 00:26:24.712 17:23:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:24.712 17:23:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58438 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58438 ']' 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58438 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58438 00:26:29.985 killing process with pid 58438 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58438' 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58438 00:26:29.985 17:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58438 00:26:32.521 17:23:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:32.521 17:23:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:32.521 00:26:32.521 real 0m12.013s 00:26:32.521 user 0m11.517s 00:26:32.521 sys 0m0.907s 00:26:32.521 17:23:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.521 17:23:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:32.521 ************************************ 00:26:32.521 END TEST skip_rpc_with_json 00:26:32.521 ************************************ 00:26:32.521 17:23:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:26:32.522 17:23:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:32.522 17:23:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.522 17:23:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:32.522 ************************************ 00:26:32.522 START TEST skip_rpc_with_delay 00:26:32.522 ************************************ 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:32.522 17:23:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:32.779 [2024-11-26 17:23:09.977313] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
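
The failure logged above is the expected outcome: --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, so the app refuses to start, and the NOT wrapper converts that failure into a pass. A reduced sketch of the pattern (the real NOT in common/autotest_common.sh also records the exit code and treats es > 128 as a signal, which this version omits):

NOT() {
    # run a command that is expected to fail and invert its status
    if "$@"; then
        return 1    # unexpected success: the test should fail
    fi
    return 0        # expected failure: the test passes
}

NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
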
00:26:32.779 17:23:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:26:32.779 17:23:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:32.779 17:23:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:32.779 17:23:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:32.779 00:26:32.779 real 0m0.192s 00:26:32.779 user 0m0.085s 00:26:32.779 sys 0m0.104s 00:26:32.779 17:23:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.779 17:23:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:26:32.779 ************************************ 00:26:32.779 END TEST skip_rpc_with_delay 00:26:32.779 ************************************ 00:26:32.779 17:23:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:26:32.779 17:23:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:26:32.779 17:23:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:26:32.779 17:23:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:32.779 17:23:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.779 17:23:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:32.779 ************************************ 00:26:32.779 START TEST exit_on_failed_rpc_init 00:26:32.779 ************************************ 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58571 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58571 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58571 ']' 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.779 17:23:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.038 [2024-11-26 17:23:10.230935] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
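
exit_on_failed_rpc_init, starting here, checks that a target which cannot bind its RPC socket exits with a failure code instead of hanging. A condensed sketch of the scenario using the helpers named in the trace (core masks match the log; other argument details are illustrative):

spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock
first_pid=$!
waitforlisten "$first_pid"
NOT spdk_tgt -m 0x2        # second instance: socket in use, init must fail
killprocess "$first_pid"
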
00:26:33.038 [2024-11-26 17:23:10.231058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58571 ] 00:26:33.038 [2024-11-26 17:23:10.406883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.297 [2024-11-26 17:23:10.531572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:34.233 17:23:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:34.492 [2024-11-26 17:23:11.688469] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:34.492 [2024-11-26 17:23:11.688628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:26:34.492 [2024-11-26 17:23:11.873929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.751 [2024-11-26 17:23:12.001100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.751 [2024-11-26 17:23:12.001206] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:34.751 [2024-11-26 17:23:12.001221] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:34.751 [2024-11-26 17:23:12.001239] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58571 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58571 ']' 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58571 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58571 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.009 killing process with pid 58571 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58571' 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58571 00:26:35.009 17:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58571 00:26:38.298 00:26:38.298 real 0m5.022s 00:26:38.298 user 0m5.411s 00:26:38.298 sys 0m0.660s 00:26:38.298 17:23:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.298 17:23:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:38.298 ************************************ 00:26:38.298 END TEST exit_on_failed_rpc_init 00:26:38.298 ************************************ 00:26:38.298 17:23:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:38.298 00:26:38.298 real 0m25.634s 00:26:38.298 user 0m24.634s 00:26:38.298 sys 0m2.371s 00:26:38.298 17:23:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.298 17:23:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:38.298 ************************************ 00:26:38.298 END TEST skip_rpc 00:26:38.298 ************************************ 00:26:38.298 17:23:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:38.298 17:23:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:38.298 17:23:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.298 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:26:38.298 
************************************ 00:26:38.298 START TEST rpc_client 00:26:38.298 ************************************ 00:26:38.298 17:23:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:38.298 * Looking for test storage... 00:26:38.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:26:38.298 17:23:15 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:38.298 17:23:15 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.299 17:23:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.299 --rc genhtml_branch_coverage=1 00:26:38.299 --rc genhtml_function_coverage=1 00:26:38.299 --rc genhtml_legend=1 00:26:38.299 --rc geninfo_all_blocks=1 00:26:38.299 --rc geninfo_unexecuted_blocks=1 00:26:38.299 00:26:38.299 ' 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.299 --rc genhtml_branch_coverage=1 00:26:38.299 --rc genhtml_function_coverage=1 00:26:38.299 --rc genhtml_legend=1 00:26:38.299 --rc geninfo_all_blocks=1 00:26:38.299 --rc geninfo_unexecuted_blocks=1 00:26:38.299 00:26:38.299 ' 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.299 --rc genhtml_branch_coverage=1 00:26:38.299 --rc genhtml_function_coverage=1 00:26:38.299 --rc genhtml_legend=1 00:26:38.299 --rc geninfo_all_blocks=1 00:26:38.299 --rc geninfo_unexecuted_blocks=1 00:26:38.299 00:26:38.299 ' 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.299 --rc genhtml_branch_coverage=1 00:26:38.299 --rc genhtml_function_coverage=1 00:26:38.299 --rc genhtml_legend=1 00:26:38.299 --rc geninfo_all_blocks=1 00:26:38.299 --rc geninfo_unexecuted_blocks=1 00:26:38.299 00:26:38.299 ' 00:26:38.299 17:23:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:26:38.299 OK 00:26:38.299 17:23:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:26:38.299 00:26:38.299 real 0m0.301s 00:26:38.299 user 0m0.166s 00:26:38.299 sys 0m0.157s 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.299 17:23:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:26:38.299 ************************************ 00:26:38.299 END TEST rpc_client 00:26:38.299 ************************************ 00:26:38.299 17:23:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:38.299 17:23:15 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:38.299 17:23:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.299 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:26:38.299 ************************************ 00:26:38.299 START TEST json_config 00:26:38.299 ************************************ 00:26:38.299 17:23:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:38.299 17:23:15 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:38.299 17:23:15 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:26:38.299 17:23:15 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:38.559 17:23:15 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.559 17:23:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.559 17:23:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.559 17:23:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.559 17:23:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.559 17:23:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.559 17:23:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:26:38.559 17:23:15 json_config -- scripts/common.sh@345 -- # : 1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.559 17:23:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.559 17:23:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@353 -- # local d=1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.559 17:23:15 json_config -- scripts/common.sh@355 -- # echo 1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.559 17:23:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@353 -- # local d=2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.559 17:23:15 json_config -- scripts/common.sh@355 -- # echo 2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.559 17:23:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.559 17:23:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.559 17:23:15 json_config -- scripts/common.sh@368 -- # return 0 00:26:38.559 17:23:15 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.559 17:23:15 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:38.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.559 --rc genhtml_branch_coverage=1 00:26:38.559 --rc genhtml_function_coverage=1 00:26:38.559 --rc genhtml_legend=1 00:26:38.559 --rc geninfo_all_blocks=1 00:26:38.559 --rc geninfo_unexecuted_blocks=1 00:26:38.559 00:26:38.559 ' 00:26:38.559 17:23:15 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:38.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.559 --rc genhtml_branch_coverage=1 00:26:38.559 --rc genhtml_function_coverage=1 00:26:38.559 --rc genhtml_legend=1 00:26:38.559 --rc geninfo_all_blocks=1 00:26:38.559 --rc geninfo_unexecuted_blocks=1 00:26:38.559 00:26:38.559 ' 00:26:38.559 17:23:15 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:38.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.559 --rc genhtml_branch_coverage=1 00:26:38.559 --rc genhtml_function_coverage=1 00:26:38.559 --rc genhtml_legend=1 00:26:38.559 --rc geninfo_all_blocks=1 00:26:38.559 --rc geninfo_unexecuted_blocks=1 00:26:38.559 00:26:38.559 ' 00:26:38.559 17:23:15 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:38.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.559 --rc genhtml_branch_coverage=1 00:26:38.559 --rc genhtml_function_coverage=1 00:26:38.559 --rc genhtml_legend=1 00:26:38.559 --rc geninfo_all_blocks=1 00:26:38.559 --rc geninfo_unexecuted_blocks=1 00:26:38.559 00:26:38.559 ' 00:26:38.559 17:23:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.559 17:23:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1430afe-7853-490a-a832-69c50badaf60 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e1430afe-7853-490a-a832-69c50badaf60 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.559 17:23:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.559 17:23:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.559 17:23:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.559 17:23:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.559 17:23:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.559 17:23:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.559 17:23:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.559 17:23:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.560 17:23:15 json_config -- paths/export.sh@5 -- # export PATH 00:26:38.560 17:23:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@51 -- # : 0 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.560 17:23:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.560 17:23:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:26:38.560 WARNING: No tests are enabled so not running JSON configuration tests 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:26:38.560 17:23:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:26:38.560 00:26:38.560 real 0m0.231s 00:26:38.560 user 0m0.134s 00:26:38.560 sys 0m0.108s 00:26:38.560 17:23:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.560 17:23:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 ************************************ 00:26:38.560 END TEST json_config 00:26:38.560 ************************************ 00:26:38.560 17:23:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:38.560 17:23:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:38.560 17:23:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.560 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 ************************************ 00:26:38.560 START TEST json_config_extra_key 00:26:38.560 ************************************ 00:26:38.560 17:23:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.821 17:23:16 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.821 17:23:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.821 --rc genhtml_branch_coverage=1 00:26:38.821 --rc genhtml_function_coverage=1 00:26:38.821 --rc genhtml_legend=1 00:26:38.821 --rc geninfo_all_blocks=1 00:26:38.821 --rc geninfo_unexecuted_blocks=1 00:26:38.821 00:26:38.821 ' 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.821 --rc genhtml_branch_coverage=1 00:26:38.821 --rc genhtml_function_coverage=1 00:26:38.821 --rc genhtml_legend=1 00:26:38.821 --rc geninfo_all_blocks=1 00:26:38.821 --rc geninfo_unexecuted_blocks=1 00:26:38.821 00:26:38.821 ' 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.821 --rc genhtml_branch_coverage=1 00:26:38.821 --rc genhtml_function_coverage=1 00:26:38.821 --rc genhtml_legend=1 00:26:38.821 --rc geninfo_all_blocks=1 00:26:38.821 --rc geninfo_unexecuted_blocks=1 00:26:38.821 00:26:38.821 ' 00:26:38.821 17:23:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.821 --rc genhtml_branch_coverage=1 00:26:38.821 --rc 
genhtml_function_coverage=1 00:26:38.821 --rc genhtml_legend=1 00:26:38.821 --rc geninfo_all_blocks=1 00:26:38.821 --rc geninfo_unexecuted_blocks=1 00:26:38.821 00:26:38.821 ' 00:26:38.821 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1430afe-7853-490a-a832-69c50badaf60 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e1430afe-7853-490a-a832-69c50badaf60 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.821 17:23:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.822 17:23:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.822 17:23:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.822 17:23:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.822 17:23:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.822 17:23:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.822 17:23:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.822 17:23:16 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.822 17:23:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:26:38.822 17:23:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.822 17:23:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:26:38.822 INFO: launching applications... 00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
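The repeated diagnostic "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" in these traces comes from the test `'[' '' -eq 1 ']'`: POSIX `[` requires an integer on both sides of `-eq`, and the left-hand variable expands to an empty string, so the test prints an error and simply evaluates as false while the script carries on. A minimal sketch of the failure and two common guards (the variable name is illustrative, not taken from the SPDK scripts):

    flag=""                              # optional toggle left unset/empty
    # [ "$flag" -eq 1 ]                  # errors: "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo on    # guard 1: default the empty value to 0
    (( flag == 1 )) && echo on           # guard 2: arithmetic context treats empty as 0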
00:26:38.822 17:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58810 00:26:38.822 Waiting for target to run... 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58810 /var/tmp/spdk_tgt.sock 00:26:38.822 17:23:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58810 ']' 00:26:38.822 17:23:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:38.822 17:23:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:38.822 17:23:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:38.822 17:23:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:38.822 17:23:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.822 17:23:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:39.088 [2024-11-26 17:23:16.286185] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:39.088 [2024-11-26 17:23:16.286353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:26:39.667 [2024-11-26 17:23:16.832287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.667 [2024-11-26 17:23:16.960421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.602 17:23:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.602 17:23:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:26:40.602 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:26:40.602 INFO: shutting down applications... 00:26:40.602 17:23:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
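Here json_config_test_start_app launches spdk_tgt with `-r /var/tmp/spdk_tgt.sock` and then blocks in waitforlisten (max_retries=100) until RPC on that UNIX socket answers. A simplified readiness poll in the same spirit, not the exact autotest_common.sh implementation; `tgt_pid` is assumed to hold the PID launched above:

    sock=/var/tmp/spdk_tgt.sock
    for ((i = 0; i < 100; i++)); do
        # Give up early if the target died instead of coming up.
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'target exited early'; exit 1; }
        # Consider the target ready once an RPC round-trip on the socket succeeds.
        if [[ -S $sock ]] && scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done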
00:26:40.602 17:23:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58810 ]] 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58810 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:40.602 17:23:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:41.170 17:23:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:41.170 17:23:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:41.170 17:23:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:41.170 17:23:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:41.430 17:23:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:41.430 17:23:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:41.430 17:23:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:41.430 17:23:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:41.995 17:23:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:41.995 17:23:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:41.995 17:23:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:41.995 17:23:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:42.559 17:23:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:42.559 17:23:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:42.559 17:23:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:42.559 17:23:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:43.123 17:23:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:43.123 17:23:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:43.123 17:23:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:43.123 17:23:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58810 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:26:43.689 SPDK target shutdown done 00:26:43.689 17:23:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:26:43.689 Success 00:26:43.689 17:23:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:26:43.689 00:26:43.689 real 0m4.954s 00:26:43.689 user 0m4.585s 00:26:43.689 sys 0m0.735s 00:26:43.689 
17:23:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.689 17:23:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:43.689 ************************************ 00:26:43.689 END TEST json_config_extra_key 00:26:43.689 ************************************ 00:26:43.689 17:23:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:43.689 17:23:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:43.689 17:23:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.689 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:43.689 ************************************ 00:26:43.689 START TEST alias_rpc 00:26:43.689 ************************************ 00:26:43.689 17:23:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:43.689 * Looking for test storage... 00:26:43.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:26:43.689 17:23:20 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:43.689 17:23:20 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:43.689 17:23:20 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.689 17:23:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:43.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.689 --rc genhtml_branch_coverage=1 00:26:43.689 --rc genhtml_function_coverage=1 00:26:43.689 --rc genhtml_legend=1 00:26:43.689 --rc geninfo_all_blocks=1 00:26:43.689 --rc geninfo_unexecuted_blocks=1 00:26:43.689 00:26:43.689 ' 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:43.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.689 --rc genhtml_branch_coverage=1 00:26:43.689 --rc genhtml_function_coverage=1 00:26:43.689 --rc genhtml_legend=1 00:26:43.689 --rc geninfo_all_blocks=1 00:26:43.689 --rc geninfo_unexecuted_blocks=1 00:26:43.689 00:26:43.689 ' 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:43.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.689 --rc genhtml_branch_coverage=1 00:26:43.689 --rc genhtml_function_coverage=1 00:26:43.689 --rc genhtml_legend=1 00:26:43.689 --rc geninfo_all_blocks=1 00:26:43.689 --rc geninfo_unexecuted_blocks=1 00:26:43.689 00:26:43.689 ' 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:43.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.689 --rc genhtml_branch_coverage=1 00:26:43.689 --rc genhtml_function_coverage=1 00:26:43.689 --rc genhtml_legend=1 00:26:43.689 --rc geninfo_all_blocks=1 00:26:43.689 --rc geninfo_unexecuted_blocks=1 00:26:43.689 00:26:43.689 ' 00:26:43.689 17:23:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:43.689 17:23:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.689 17:23:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58922 00:26:43.689 17:23:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58922 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58922 ']' 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:43.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.689 17:23:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:43.947 [2024-11-26 17:23:21.215589] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:43.947 [2024-11-26 17:23:21.215810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ] 00:26:43.947 [2024-11-26 17:23:21.390038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.203 [2024-11-26 17:23:21.567268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.633 17:23:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.633 17:23:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:45.633 17:23:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:26:45.633 17:23:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58922 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58922 ']' 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58922 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58922 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:45.633 killing process with pid 58922 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58922' 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 58922 00:26:45.633 17:23:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 58922 00:26:48.921 00:26:48.921 real 0m4.832s 00:26:48.921 user 0m4.859s 00:26:48.921 sys 0m0.723s 00:26:48.921 17:23:25 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.921 17:23:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:48.921 ************************************ 00:26:48.921 END TEST alias_rpc 00:26:48.921 ************************************ 00:26:48.921 17:23:25 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:26:48.921 17:23:25 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:48.921 17:23:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:48.921 17:23:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.921 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:26:48.921 ************************************ 00:26:48.921 START TEST spdkcli_tcp 00:26:48.921 ************************************ 00:26:48.921 17:23:25 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:48.921 * Looking for test storage... 
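The killprocess helper traced above refuses to signal blindly: it verifies the PID argument is set and the process is alive, reads the process's comm name with `ps --no-headers -o comm=` on Linux, checks whether that name is `sudo` (a wrapped child needs different handling; here it is `reactor_0`), and only then kills and reaps it. A condensed sketch, not the exact autotest_common.sh code:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                 # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # Linux: read the comm name
        # (the real helper special-cases name == "sudo"; omitted here)
        echo "killing process with pid $pid"
        kill "$pid"                                # default SIGTERM
        wait "$pid" || true                        # reap it; tolerate a nonzero status
    }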
00:26:48.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:48.921 17:23:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:48.921 17:23:25 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:26:48.921 17:23:25 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:48.921 17:23:26 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:48.921 17:23:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.922 17:23:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:48.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.922 --rc genhtml_branch_coverage=1 00:26:48.922 --rc genhtml_function_coverage=1 00:26:48.922 --rc genhtml_legend=1 00:26:48.922 --rc geninfo_all_blocks=1 00:26:48.922 --rc geninfo_unexecuted_blocks=1 00:26:48.922 00:26:48.922 ' 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:48.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.922 --rc genhtml_branch_coverage=1 00:26:48.922 --rc genhtml_function_coverage=1 00:26:48.922 --rc genhtml_legend=1 00:26:48.922 --rc geninfo_all_blocks=1 00:26:48.922 --rc geninfo_unexecuted_blocks=1 00:26:48.922 
00:26:48.922 ' 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:48.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.922 --rc genhtml_branch_coverage=1 00:26:48.922 --rc genhtml_function_coverage=1 00:26:48.922 --rc genhtml_legend=1 00:26:48.922 --rc geninfo_all_blocks=1 00:26:48.922 --rc geninfo_unexecuted_blocks=1 00:26:48.922 00:26:48.922 ' 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:48.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.922 --rc genhtml_branch_coverage=1 00:26:48.922 --rc genhtml_function_coverage=1 00:26:48.922 --rc genhtml_legend=1 00:26:48.922 --rc geninfo_all_blocks=1 00:26:48.922 --rc geninfo_unexecuted_blocks=1 00:26:48.922 00:26:48.922 ' 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59040 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:48.922 17:23:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59040 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59040 ']' 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.922 17:23:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.922 [2024-11-26 17:23:26.174712] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
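For this test spdk_tgt is started with `-m 0x3 -p 0`: the `-m` core mask is a bitmask in which bit n selects CPU core n, so 0x3 (binary 11) enables cores 0 and 1, matching the two "Reactor started" notices that follow, while `-p 0` makes core 0 the main core. A tiny illustrative decoder for such masks:

    mask=0x3                         # as passed to spdk_tgt -m
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done
    # 0x3 -> core 0 enabled, core 1 enabled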
00:26:48.922 [2024-11-26 17:23:26.174863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ] 00:26:49.188 [2024-11-26 17:23:26.372207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:49.188 [2024-11-26 17:23:26.501957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.188 [2024-11-26 17:23:26.501990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.132 17:23:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.132 17:23:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:26:50.132 17:23:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:26:50.133 17:23:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59057 00:26:50.133 17:23:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:26:50.392 [ 00:26:50.392 "bdev_malloc_delete", 00:26:50.392 "bdev_malloc_create", 00:26:50.392 "bdev_null_resize", 00:26:50.392 "bdev_null_delete", 00:26:50.392 "bdev_null_create", 00:26:50.392 "bdev_nvme_cuse_unregister", 00:26:50.392 "bdev_nvme_cuse_register", 00:26:50.392 "bdev_opal_new_user", 00:26:50.392 "bdev_opal_set_lock_state", 00:26:50.392 "bdev_opal_delete", 00:26:50.392 "bdev_opal_get_info", 00:26:50.392 "bdev_opal_create", 00:26:50.392 "bdev_nvme_opal_revert", 00:26:50.392 "bdev_nvme_opal_init", 00:26:50.392 "bdev_nvme_send_cmd", 00:26:50.392 "bdev_nvme_set_keys", 00:26:50.392 "bdev_nvme_get_path_iostat", 00:26:50.392 "bdev_nvme_get_mdns_discovery_info", 00:26:50.392 "bdev_nvme_stop_mdns_discovery", 00:26:50.392 "bdev_nvme_start_mdns_discovery", 00:26:50.392 "bdev_nvme_set_multipath_policy", 00:26:50.392 "bdev_nvme_set_preferred_path", 00:26:50.392 "bdev_nvme_get_io_paths", 00:26:50.392 "bdev_nvme_remove_error_injection", 00:26:50.392 "bdev_nvme_add_error_injection", 00:26:50.392 "bdev_nvme_get_discovery_info", 00:26:50.392 "bdev_nvme_stop_discovery", 00:26:50.392 "bdev_nvme_start_discovery", 00:26:50.392 "bdev_nvme_get_controller_health_info", 00:26:50.392 "bdev_nvme_disable_controller", 00:26:50.392 "bdev_nvme_enable_controller", 00:26:50.392 "bdev_nvme_reset_controller", 00:26:50.392 "bdev_nvme_get_transport_statistics", 00:26:50.392 "bdev_nvme_apply_firmware", 00:26:50.392 "bdev_nvme_detach_controller", 00:26:50.392 "bdev_nvme_get_controllers", 00:26:50.392 "bdev_nvme_attach_controller", 00:26:50.392 "bdev_nvme_set_hotplug", 00:26:50.392 "bdev_nvme_set_options", 00:26:50.392 "bdev_passthru_delete", 00:26:50.392 "bdev_passthru_create", 00:26:50.392 "bdev_lvol_set_parent_bdev", 00:26:50.392 "bdev_lvol_set_parent", 00:26:50.392 "bdev_lvol_check_shallow_copy", 00:26:50.392 "bdev_lvol_start_shallow_copy", 00:26:50.392 "bdev_lvol_grow_lvstore", 00:26:50.392 "bdev_lvol_get_lvols", 00:26:50.392 "bdev_lvol_get_lvstores", 00:26:50.392 "bdev_lvol_delete", 00:26:50.392 "bdev_lvol_set_read_only", 00:26:50.392 "bdev_lvol_resize", 00:26:50.392 "bdev_lvol_decouple_parent", 00:26:50.392 "bdev_lvol_inflate", 00:26:50.392 "bdev_lvol_rename", 00:26:50.392 "bdev_lvol_clone_bdev", 00:26:50.392 "bdev_lvol_clone", 00:26:50.392 "bdev_lvol_snapshot", 00:26:50.392 "bdev_lvol_create", 00:26:50.392 "bdev_lvol_delete_lvstore", 00:26:50.392 "bdev_lvol_rename_lvstore", 00:26:50.392 
"bdev_lvol_create_lvstore", 00:26:50.392 "bdev_raid_set_options", 00:26:50.392 "bdev_raid_remove_base_bdev", 00:26:50.392 "bdev_raid_add_base_bdev", 00:26:50.392 "bdev_raid_delete", 00:26:50.392 "bdev_raid_create", 00:26:50.392 "bdev_raid_get_bdevs", 00:26:50.392 "bdev_error_inject_error", 00:26:50.392 "bdev_error_delete", 00:26:50.392 "bdev_error_create", 00:26:50.392 "bdev_split_delete", 00:26:50.392 "bdev_split_create", 00:26:50.392 "bdev_delay_delete", 00:26:50.392 "bdev_delay_create", 00:26:50.392 "bdev_delay_update_latency", 00:26:50.392 "bdev_zone_block_delete", 00:26:50.392 "bdev_zone_block_create", 00:26:50.392 "blobfs_create", 00:26:50.392 "blobfs_detect", 00:26:50.392 "blobfs_set_cache_size", 00:26:50.392 "bdev_xnvme_delete", 00:26:50.392 "bdev_xnvme_create", 00:26:50.392 "bdev_aio_delete", 00:26:50.392 "bdev_aio_rescan", 00:26:50.392 "bdev_aio_create", 00:26:50.392 "bdev_ftl_set_property", 00:26:50.392 "bdev_ftl_get_properties", 00:26:50.392 "bdev_ftl_get_stats", 00:26:50.392 "bdev_ftl_unmap", 00:26:50.392 "bdev_ftl_unload", 00:26:50.392 "bdev_ftl_delete", 00:26:50.392 "bdev_ftl_load", 00:26:50.392 "bdev_ftl_create", 00:26:50.392 "bdev_virtio_attach_controller", 00:26:50.392 "bdev_virtio_scsi_get_devices", 00:26:50.392 "bdev_virtio_detach_controller", 00:26:50.392 "bdev_virtio_blk_set_hotplug", 00:26:50.392 "bdev_iscsi_delete", 00:26:50.392 "bdev_iscsi_create", 00:26:50.392 "bdev_iscsi_set_options", 00:26:50.392 "accel_error_inject_error", 00:26:50.392 "ioat_scan_accel_module", 00:26:50.392 "dsa_scan_accel_module", 00:26:50.392 "iaa_scan_accel_module", 00:26:50.392 "keyring_file_remove_key", 00:26:50.392 "keyring_file_add_key", 00:26:50.392 "keyring_linux_set_options", 00:26:50.392 "fsdev_aio_delete", 00:26:50.392 "fsdev_aio_create", 00:26:50.392 "iscsi_get_histogram", 00:26:50.392 "iscsi_enable_histogram", 00:26:50.392 "iscsi_set_options", 00:26:50.392 "iscsi_get_auth_groups", 00:26:50.392 "iscsi_auth_group_remove_secret", 00:26:50.392 "iscsi_auth_group_add_secret", 00:26:50.392 "iscsi_delete_auth_group", 00:26:50.392 "iscsi_create_auth_group", 00:26:50.392 "iscsi_set_discovery_auth", 00:26:50.392 "iscsi_get_options", 00:26:50.392 "iscsi_target_node_request_logout", 00:26:50.392 "iscsi_target_node_set_redirect", 00:26:50.392 "iscsi_target_node_set_auth", 00:26:50.392 "iscsi_target_node_add_lun", 00:26:50.392 "iscsi_get_stats", 00:26:50.392 "iscsi_get_connections", 00:26:50.392 "iscsi_portal_group_set_auth", 00:26:50.392 "iscsi_start_portal_group", 00:26:50.392 "iscsi_delete_portal_group", 00:26:50.392 "iscsi_create_portal_group", 00:26:50.392 "iscsi_get_portal_groups", 00:26:50.392 "iscsi_delete_target_node", 00:26:50.392 "iscsi_target_node_remove_pg_ig_maps", 00:26:50.392 "iscsi_target_node_add_pg_ig_maps", 00:26:50.392 "iscsi_create_target_node", 00:26:50.392 "iscsi_get_target_nodes", 00:26:50.392 "iscsi_delete_initiator_group", 00:26:50.392 "iscsi_initiator_group_remove_initiators", 00:26:50.392 "iscsi_initiator_group_add_initiators", 00:26:50.392 "iscsi_create_initiator_group", 00:26:50.392 "iscsi_get_initiator_groups", 00:26:50.392 "nvmf_set_crdt", 00:26:50.392 "nvmf_set_config", 00:26:50.392 "nvmf_set_max_subsystems", 00:26:50.392 "nvmf_stop_mdns_prr", 00:26:50.392 "nvmf_publish_mdns_prr", 00:26:50.392 "nvmf_subsystem_get_listeners", 00:26:50.392 "nvmf_subsystem_get_qpairs", 00:26:50.392 "nvmf_subsystem_get_controllers", 00:26:50.392 "nvmf_get_stats", 00:26:50.392 "nvmf_get_transports", 00:26:50.392 "nvmf_create_transport", 00:26:50.392 "nvmf_get_targets", 00:26:50.393 
"nvmf_delete_target", 00:26:50.393 "nvmf_create_target", 00:26:50.393 "nvmf_subsystem_allow_any_host", 00:26:50.393 "nvmf_subsystem_set_keys", 00:26:50.393 "nvmf_subsystem_remove_host", 00:26:50.393 "nvmf_subsystem_add_host", 00:26:50.393 "nvmf_ns_remove_host", 00:26:50.393 "nvmf_ns_add_host", 00:26:50.393 "nvmf_subsystem_remove_ns", 00:26:50.393 "nvmf_subsystem_set_ns_ana_group", 00:26:50.393 "nvmf_subsystem_add_ns", 00:26:50.393 "nvmf_subsystem_listener_set_ana_state", 00:26:50.393 "nvmf_discovery_get_referrals", 00:26:50.393 "nvmf_discovery_remove_referral", 00:26:50.393 "nvmf_discovery_add_referral", 00:26:50.393 "nvmf_subsystem_remove_listener", 00:26:50.393 "nvmf_subsystem_add_listener", 00:26:50.393 "nvmf_delete_subsystem", 00:26:50.393 "nvmf_create_subsystem", 00:26:50.393 "nvmf_get_subsystems", 00:26:50.393 "env_dpdk_get_mem_stats", 00:26:50.393 "nbd_get_disks", 00:26:50.393 "nbd_stop_disk", 00:26:50.393 "nbd_start_disk", 00:26:50.393 "ublk_recover_disk", 00:26:50.393 "ublk_get_disks", 00:26:50.393 "ublk_stop_disk", 00:26:50.393 "ublk_start_disk", 00:26:50.393 "ublk_destroy_target", 00:26:50.393 "ublk_create_target", 00:26:50.393 "virtio_blk_create_transport", 00:26:50.393 "virtio_blk_get_transports", 00:26:50.393 "vhost_controller_set_coalescing", 00:26:50.393 "vhost_get_controllers", 00:26:50.393 "vhost_delete_controller", 00:26:50.393 "vhost_create_blk_controller", 00:26:50.393 "vhost_scsi_controller_remove_target", 00:26:50.393 "vhost_scsi_controller_add_target", 00:26:50.393 "vhost_start_scsi_controller", 00:26:50.393 "vhost_create_scsi_controller", 00:26:50.393 "thread_set_cpumask", 00:26:50.393 "scheduler_set_options", 00:26:50.393 "framework_get_governor", 00:26:50.393 "framework_get_scheduler", 00:26:50.393 "framework_set_scheduler", 00:26:50.393 "framework_get_reactors", 00:26:50.393 "thread_get_io_channels", 00:26:50.393 "thread_get_pollers", 00:26:50.393 "thread_get_stats", 00:26:50.393 "framework_monitor_context_switch", 00:26:50.393 "spdk_kill_instance", 00:26:50.393 "log_enable_timestamps", 00:26:50.393 "log_get_flags", 00:26:50.393 "log_clear_flag", 00:26:50.393 "log_set_flag", 00:26:50.393 "log_get_level", 00:26:50.393 "log_set_level", 00:26:50.393 "log_get_print_level", 00:26:50.393 "log_set_print_level", 00:26:50.393 "framework_enable_cpumask_locks", 00:26:50.393 "framework_disable_cpumask_locks", 00:26:50.393 "framework_wait_init", 00:26:50.393 "framework_start_init", 00:26:50.393 "scsi_get_devices", 00:26:50.393 "bdev_get_histogram", 00:26:50.393 "bdev_enable_histogram", 00:26:50.393 "bdev_set_qos_limit", 00:26:50.393 "bdev_set_qd_sampling_period", 00:26:50.393 "bdev_get_bdevs", 00:26:50.393 "bdev_reset_iostat", 00:26:50.393 "bdev_get_iostat", 00:26:50.393 "bdev_examine", 00:26:50.393 "bdev_wait_for_examine", 00:26:50.393 "bdev_set_options", 00:26:50.393 "accel_get_stats", 00:26:50.393 "accel_set_options", 00:26:50.393 "accel_set_driver", 00:26:50.393 "accel_crypto_key_destroy", 00:26:50.393 "accel_crypto_keys_get", 00:26:50.393 "accel_crypto_key_create", 00:26:50.393 "accel_assign_opc", 00:26:50.393 "accel_get_module_info", 00:26:50.393 "accel_get_opc_assignments", 00:26:50.393 "vmd_rescan", 00:26:50.393 "vmd_remove_device", 00:26:50.393 "vmd_enable", 00:26:50.393 "sock_get_default_impl", 00:26:50.393 "sock_set_default_impl", 00:26:50.393 "sock_impl_set_options", 00:26:50.393 "sock_impl_get_options", 00:26:50.393 "iobuf_get_stats", 00:26:50.393 "iobuf_set_options", 00:26:50.393 "keyring_get_keys", 00:26:50.393 "framework_get_pci_devices", 00:26:50.393 
"framework_get_config", 00:26:50.393 "framework_get_subsystems", 00:26:50.393 "fsdev_set_opts", 00:26:50.393 "fsdev_get_opts", 00:26:50.393 "trace_get_info", 00:26:50.393 "trace_get_tpoint_group_mask", 00:26:50.393 "trace_disable_tpoint_group", 00:26:50.393 "trace_enable_tpoint_group", 00:26:50.393 "trace_clear_tpoint_mask", 00:26:50.393 "trace_set_tpoint_mask", 00:26:50.393 "notify_get_notifications", 00:26:50.393 "notify_get_types", 00:26:50.393 "spdk_get_version", 00:26:50.393 "rpc_get_methods" 00:26:50.393 ] 00:26:50.393 17:23:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.393 17:23:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:50.393 17:23:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59040 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59040 ']' 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59040 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59040 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:50.393 killing process with pid 59040 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59040' 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59040 00:26:50.393 17:23:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59040 00:26:53.679 ************************************ 00:26:53.679 END TEST spdkcli_tcp 00:26:53.679 ************************************ 00:26:53.679 00:26:53.679 real 0m4.846s 00:26:53.679 user 0m8.741s 00:26:53.679 sys 0m0.733s 00:26:53.679 17:23:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.679 17:23:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.679 17:23:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:53.679 17:23:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:53.679 17:23:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.679 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:53.679 ************************************ 00:26:53.679 START TEST dpdk_mem_utility 00:26:53.679 ************************************ 00:26:53.679 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:53.679 * Looking for test storage... 
00:26:53.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.680 17:23:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.680 --rc genhtml_branch_coverage=1 00:26:53.680 --rc genhtml_function_coverage=1 00:26:53.680 --rc genhtml_legend=1 00:26:53.680 --rc geninfo_all_blocks=1 00:26:53.680 --rc geninfo_unexecuted_blocks=1 00:26:53.680 00:26:53.680 ' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.680 --rc 
genhtml_branch_coverage=1 00:26:53.680 --rc genhtml_function_coverage=1 00:26:53.680 --rc genhtml_legend=1 00:26:53.680 --rc geninfo_all_blocks=1 00:26:53.680 --rc geninfo_unexecuted_blocks=1 00:26:53.680 00:26:53.680 ' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.680 --rc genhtml_branch_coverage=1 00:26:53.680 --rc genhtml_function_coverage=1 00:26:53.680 --rc genhtml_legend=1 00:26:53.680 --rc geninfo_all_blocks=1 00:26:53.680 --rc geninfo_unexecuted_blocks=1 00:26:53.680 00:26:53.680 ' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.680 --rc genhtml_branch_coverage=1 00:26:53.680 --rc genhtml_function_coverage=1 00:26:53.680 --rc genhtml_legend=1 00:26:53.680 --rc geninfo_all_blocks=1 00:26:53.680 --rc geninfo_unexecuted_blocks=1 00:26:53.680 00:26:53.680 ' 00:26:53.680 17:23:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:53.680 17:23:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.680 17:23:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59173 00:26:53.680 17:23:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59173 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59173 ']' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.680 17:23:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:53.680 [2024-11-26 17:23:31.059043] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
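The dpdk_mem_utility test starting here exercises two pieces visible in the trace below: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which renders that dump as heap, mempool, and memzone summaries when run bare, or as a per-element listing for one heap with `-m`. The flow as a sketch (assuming the script picks up the dump file the RPC reports):

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0            # detailed element list for heap id 0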
00:26:53.680 [2024-11-26 17:23:31.059184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:26:53.939 [2024-11-26 17:23:31.223105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.939 [2024-11-26 17:23:31.369098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.337 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.337 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:26:55.337 17:23:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:26:55.337 17:23:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:26:55.337 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.337 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:55.337 { 00:26:55.337 "filename": "/tmp/spdk_mem_dump.txt" 00:26:55.337 } 00:26:55.337 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.337 17:23:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:55.337 DPDK memory size 824.000000 MiB in 1 heap(s) 00:26:55.337 1 heaps totaling size 824.000000 MiB 00:26:55.337 size: 824.000000 MiB heap id: 0 00:26:55.337 end heaps---------- 00:26:55.337 9 mempools totaling size 603.782043 MiB 00:26:55.337 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:26:55.337 size: 158.602051 MiB name: PDU_data_out_Pool 00:26:55.337 size: 100.555481 MiB name: bdev_io_59173 00:26:55.337 size: 50.003479 MiB name: msgpool_59173 00:26:55.337 size: 36.509338 MiB name: fsdev_io_59173 00:26:55.337 size: 21.763794 MiB name: PDU_Pool 00:26:55.337 size: 19.513306 MiB name: SCSI_TASK_Pool 00:26:55.337 size: 4.133484 MiB name: evtpool_59173 00:26:55.337 size: 0.026123 MiB name: Session_Pool 00:26:55.337 end mempools------- 00:26:55.337 6 memzones totaling size 4.142822 MiB 00:26:55.337 size: 1.000366 MiB name: RG_ring_0_59173 00:26:55.337 size: 1.000366 MiB name: RG_ring_1_59173 00:26:55.337 size: 1.000366 MiB name: RG_ring_4_59173 00:26:55.337 size: 1.000366 MiB name: RG_ring_5_59173 00:26:55.337 size: 0.125366 MiB name: RG_ring_2_59173 00:26:55.337 size: 0.015991 MiB name: RG_ring_3_59173 00:26:55.337 end memzones------- 00:26:55.337 17:23:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:26:55.337 heap id: 0 total size: 824.000000 MiB number of busy elements: 318 number of free elements: 18 00:26:55.337 list of free elements. 
size: 16.780640 MiB 00:26:55.337 element at address: 0x200006400000 with size: 1.995972 MiB 00:26:55.337 element at address: 0x20000a600000 with size: 1.995972 MiB 00:26:55.337 element at address: 0x200003e00000 with size: 1.991028 MiB 00:26:55.337 element at address: 0x200019500040 with size: 0.999939 MiB 00:26:55.337 element at address: 0x200019900040 with size: 0.999939 MiB 00:26:55.337 element at address: 0x200019a00000 with size: 0.999084 MiB 00:26:55.337 element at address: 0x200032600000 with size: 0.994324 MiB 00:26:55.337 element at address: 0x200000400000 with size: 0.992004 MiB 00:26:55.337 element at address: 0x200019200000 with size: 0.959656 MiB 00:26:55.337 element at address: 0x200019d00040 with size: 0.936401 MiB 00:26:55.337 element at address: 0x200000200000 with size: 0.716980 MiB 00:26:55.337 element at address: 0x20001b400000 with size: 0.561951 MiB 00:26:55.337 element at address: 0x200000c00000 with size: 0.489197 MiB 00:26:55.337 element at address: 0x200019600000 with size: 0.487976 MiB 00:26:55.337 element at address: 0x200019e00000 with size: 0.485413 MiB 00:26:55.337 element at address: 0x200012c00000 with size: 0.433472 MiB 00:26:55.337 element at address: 0x200028800000 with size: 0.390442 MiB 00:26:55.337 element at address: 0x200000800000 with size: 0.350891 MiB 00:26:55.337 list of standard malloc elements. size: 199.288452 MiB 00:26:55.337 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:26:55.337 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:26:55.337 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:26:55.337 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:26:55.337 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:26:55.337 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:26:55.337 element at address: 0x200019deff40 with size: 0.062683 MiB 00:26:55.337 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:26:55.337 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:26:55.337 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:26:55.337 element at address: 0x200012bff040 with size: 0.000305 MiB 00:26:55.337 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:26:55.337 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:26:55.337 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:26:55.337 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:26:55.338 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:26:55.338 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200000cff000 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff180 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff280 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff380 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff480 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff580 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff680 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff780 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff880 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bff980 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:26:55.338 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200019affc40 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:26:55.338 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b491ac0 with size: 0.000244 MiB 
00:26:55.339 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:26:55.339 element at 
address: 0x20001b494cc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:26:55.339 element at address: 0x200028863f40 with size: 0.000244 MiB 00:26:55.339 element at address: 0x200028864040 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886af80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b080 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b180 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b280 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b380 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b480 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b580 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b680 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b780 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b880 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886b980 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886be80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c080 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c180 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c280 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c380 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c480 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c580 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c680 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c780 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c880 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886c980 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d080 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d180 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d280 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d380 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d480 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d580 
with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d680 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d780 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d880 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886d980 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886da80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886db80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886de80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886df80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e080 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e180 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e280 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e380 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e480 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e580 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e680 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e780 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e880 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886e980 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f080 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f180 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f280 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f380 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f480 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f580 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f680 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f780 with size: 0.000244 MiB 00:26:55.339 element at address: 0x20002886f880 with size: 0.000244 MiB 00:26:55.340 element at address: 0x20002886f980 with size: 0.000244 MiB 00:26:55.340 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:26:55.340 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:26:55.340 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:26:55.340 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:26:55.340 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:26:55.340 list of memzone associated elements. 
size: 607.930908 MiB 00:26:55.340 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:26:55.340 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:26:55.340 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:26:55.340 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:26:55.340 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:26:55.340 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59173_0 00:26:55.340 element at address: 0x200000dff340 with size: 48.003113 MiB 00:26:55.340 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59173_0 00:26:55.340 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:26:55.340 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59173_0 00:26:55.340 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:26:55.340 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:26:55.340 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:26:55.340 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:26:55.340 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:26:55.340 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59173_0 00:26:55.340 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:26:55.340 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59173 00:26:55.340 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:26:55.340 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59173 00:26:55.340 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:26:55.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:26:55.340 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:26:55.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:26:55.340 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:26:55.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:26:55.340 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:26:55.340 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:26:55.340 element at address: 0x200000cff100 with size: 1.000549 MiB 00:26:55.340 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59173 00:26:55.340 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:26:55.340 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59173 00:26:55.340 element at address: 0x200019affd40 with size: 1.000549 MiB 00:26:55.340 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59173 00:26:55.340 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:26:55.340 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59173 00:26:55.340 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:26:55.340 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59173 00:26:55.340 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:26:55.340 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59173 00:26:55.340 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:26:55.340 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:26:55.340 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:26:55.340 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:26:55.340 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:26:55.340 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:26:55.340 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:26:55.340 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59173 00:26:55.340 element at address: 0x20000085df80 with size: 0.125549 MiB 00:26:55.340 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59173 00:26:55.340 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:26:55.340 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:26:55.340 element at address: 0x200028864140 with size: 0.023804 MiB 00:26:55.340 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:26:55.340 element at address: 0x200000859d40 with size: 0.016174 MiB 00:26:55.340 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59173 00:26:55.340 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:26:55.340 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:26:55.340 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:26:55.340 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59173 00:26:55.340 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:26:55.340 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59173 00:26:55.340 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:26:55.340 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59173 00:26:55.340 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:26:55.340 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:26:55.340 17:23:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:26:55.340 17:23:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59173 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59173 ']' 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59173 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59173 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.340 killing process with pid 59173 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59173' 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59173 00:26:55.340 17:23:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59173 00:26:58.625 00:26:58.625 real 0m4.854s 00:26:58.625 user 0m4.627s 00:26:58.625 sys 0m0.759s 00:26:58.625 17:23:35 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.625 17:23:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:58.625 ************************************ 00:26:58.625 END TEST dpdk_mem_utility 00:26:58.625 ************************************ 00:26:58.625 17:23:35 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:58.625 17:23:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:58.625 17:23:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.625 17:23:35 -- common/autotest_common.sh@10 -- # set +x 
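The heap listing above (per-element malloc entries followed by their memzone associations) is DPDK memory-state output that the dpdk_mem_utility test collects before killing process 59173. A comparable dump can be requested from any running SPDK target. A minimal sketch, assuming the env_dpdk_get_mem_stats RPC available in recent SPDK releases and its usual /tmp/spdk_mem_dump.txt output path; verify both against the target build:

    # Ask the running app to write its DPDK malloc/memzone stats to a file;
    # -s selects the app's RPC socket, as elsewhere in this log.
    ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # The JSON reply names the output file; it holds the same
    # "element at address ... with size ..." records shown above.
    cat /tmp/spdk_mem_dump.txt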
00:26:58.625 ************************************ 00:26:58.625 START TEST event 00:26:58.625 ************************************ 00:26:58.625 17:23:35 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:58.625 * Looking for test storage... 00:26:58.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:58.625 17:23:35 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.625 17:23:35 event -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.625 17:23:35 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.625 17:23:35 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.625 17:23:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.625 17:23:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.625 17:23:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.625 17:23:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.625 17:23:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.626 17:23:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.626 17:23:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.626 17:23:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.626 17:23:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.626 17:23:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.626 17:23:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.626 17:23:35 event -- scripts/common.sh@344 -- # case "$op" in 00:26:58.626 17:23:35 event -- scripts/common.sh@345 -- # : 1 00:26:58.626 17:23:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.626 17:23:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.626 17:23:35 event -- scripts/common.sh@365 -- # decimal 1 00:26:58.626 17:23:35 event -- scripts/common.sh@353 -- # local d=1 00:26:58.626 17:23:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.626 17:23:35 event -- scripts/common.sh@355 -- # echo 1 00:26:58.626 17:23:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.626 17:23:35 event -- scripts/common.sh@366 -- # decimal 2 00:26:58.626 17:23:35 event -- scripts/common.sh@353 -- # local d=2 00:26:58.626 17:23:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.626 17:23:35 event -- scripts/common.sh@355 -- # echo 2 00:26:58.626 17:23:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.626 17:23:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.626 17:23:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.626 17:23:35 event -- scripts/common.sh@368 -- # return 0 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.626 --rc genhtml_branch_coverage=1 00:26:58.626 --rc genhtml_function_coverage=1 00:26:58.626 --rc genhtml_legend=1 00:26:58.626 --rc geninfo_all_blocks=1 00:26:58.626 --rc geninfo_unexecuted_blocks=1 00:26:58.626 00:26:58.626 ' 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.626 --rc genhtml_branch_coverage=1 00:26:58.626 --rc genhtml_function_coverage=1 00:26:58.626 --rc genhtml_legend=1 00:26:58.626 --rc 
geninfo_all_blocks=1 00:26:58.626 --rc geninfo_unexecuted_blocks=1 00:26:58.626 00:26:58.626 ' 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.626 --rc genhtml_branch_coverage=1 00:26:58.626 --rc genhtml_function_coverage=1 00:26:58.626 --rc genhtml_legend=1 00:26:58.626 --rc geninfo_all_blocks=1 00:26:58.626 --rc geninfo_unexecuted_blocks=1 00:26:58.626 00:26:58.626 ' 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.626 --rc genhtml_branch_coverage=1 00:26:58.626 --rc genhtml_function_coverage=1 00:26:58.626 --rc genhtml_legend=1 00:26:58.626 --rc geninfo_all_blocks=1 00:26:58.626 --rc geninfo_unexecuted_blocks=1 00:26:58.626 00:26:58.626 ' 00:26:58.626 17:23:35 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:58.626 17:23:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:26:58.626 17:23:35 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:26:58.626 17:23:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.626 17:23:35 event -- common/autotest_common.sh@10 -- # set +x 00:26:58.626 ************************************ 00:26:58.626 START TEST event_perf 00:26:58.626 ************************************ 00:26:58.626 17:23:35 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:58.626 Running I/O for 1 seconds...[2024-11-26 17:23:35.917194] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:58.626 [2024-11-26 17:23:35.917320] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:26:58.884 [2024-11-26 17:23:36.103739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.884 [2024-11-26 17:23:36.233844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.884 [2024-11-26 17:23:36.233938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.884 [2024-11-26 17:23:36.234024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.884 Running I/O for 1 seconds...[2024-11-26 17:23:36.234057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.258 00:27:00.258 lcore 0: 98218 00:27:00.258 lcore 1: 98219 00:27:00.258 lcore 2: 98218 00:27:00.258 lcore 3: 98221 00:27:00.258 done. 
00:27:00.258 00:27:00.258 real 0m1.638s 00:27:00.258 user 0m4.385s 00:27:00.258 sys 0m0.124s 00:27:00.258 17:23:37 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.258 17:23:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:27:00.258 ************************************ 00:27:00.258 END TEST event_perf 00:27:00.258 ************************************ 00:27:00.258 17:23:37 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:00.258 17:23:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:00.258 17:23:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.258 17:23:37 event -- common/autotest_common.sh@10 -- # set +x 00:27:00.258 ************************************ 00:27:00.258 START TEST event_reactor 00:27:00.258 ************************************ 00:27:00.258 17:23:37 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:00.258 [2024-11-26 17:23:37.619567] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:00.259 [2024-11-26 17:23:37.620378] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:27:00.533 [2024-11-26 17:23:37.824726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.533 [2024-11-26 17:23:37.969581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.924 test_start 00:27:01.924 oneshot 00:27:01.924 tick 100 00:27:01.924 tick 100 00:27:01.924 tick 250 00:27:01.924 tick 100 00:27:01.924 tick 100 00:27:01.924 tick 100 00:27:01.924 tick 250 00:27:01.924 tick 500 00:27:01.924 tick 100 00:27:01.924 tick 100 00:27:01.924 tick 250 00:27:01.924 tick 100 00:27:01.924 tick 100 00:27:01.924 test_end 00:27:01.924 ************************************ 00:27:01.924 END TEST event_reactor 00:27:01.924 ************************************ 00:27:01.924 00:27:01.924 real 0m1.668s 00:27:01.924 user 0m1.431s 00:27:01.924 sys 0m0.126s 00:27:01.924 17:23:39 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.924 17:23:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:27:01.924 17:23:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:01.924 17:23:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:01.924 17:23:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.924 17:23:39 event -- common/autotest_common.sh@10 -- # set +x 00:27:01.924 ************************************ 00:27:01.924 START TEST event_reactor_perf 00:27:01.924 ************************************ 00:27:01.924 17:23:39 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:01.924 [2024-11-26 17:23:39.335135] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
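The event_perf and reactor subtests above are standalone binaries under test/event/, invoked exactly as the xtrace shows: -m sets the reactor core mask and -t the run time in seconds. Condensed, the three runs in this part of the suite are:

    # Fires events across four reactors for 1 second; the "lcore N: ..."
    # counts above are events handled on each core.
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # Single reactor with oneshot and timed pollers; the "tick 100/250/500"
    # lines correspond to pollers registered at those tick intervals.
    ./test/event/reactor/reactor -t 1
    # Measures raw event throughput on one reactor (run next, below).
    ./test/event/reactor_perf/reactor_perf -t 1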
00:27:01.924 [2024-11-26 17:23:39.335410] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:27:02.183 [2024-11-26 17:23:39.519657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.441 [2024-11-26 17:23:39.646099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.817 test_start 00:27:03.817 test_end 00:27:03.817 Performance: 327222 events per second 00:27:03.817 00:27:03.817 real 0m1.627s 00:27:03.817 user 0m1.413s 00:27:03.817 sys 0m0.103s 00:27:03.817 17:23:40 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.817 17:23:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:27:03.817 ************************************ 00:27:03.817 END TEST event_reactor_perf 00:27:03.817 ************************************ 00:27:03.817 17:23:40 event -- event/event.sh@49 -- # uname -s 00:27:03.817 17:23:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:27:03.817 17:23:40 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:03.817 17:23:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:03.817 17:23:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.817 17:23:40 event -- common/autotest_common.sh@10 -- # set +x 00:27:03.817 ************************************ 00:27:03.817 START TEST event_scheduler 00:27:03.817 ************************************ 00:27:03.817 17:23:40 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:03.817 * Looking for test storage... 
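The event_scheduler suite starting here drives a dedicated scheduler app over RPC. Condensing the bring-up that the following lines trace: the app is launched paused with --wait-for-rpc so a scheduler can be chosen before the reactors spin up; -m 0xF spans four cores and -p 0x2 selects the main lcore (the EAL line below shows --main-lcore=2):

    # Launch paused, pick the dynamic scheduler, then let framework
    # initialization proceed; waitforlisten and rpc_cmd are the harness
    # helpers from autotest_common.sh used throughout this log.
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid"
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init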
00:27:03.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:27:03.817 17:23:41 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:03.817 17:23:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:27:03.817 17:23:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:03.817 17:23:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.817 17:23:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.818 17:23:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:03.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.818 --rc genhtml_branch_coverage=1 00:27:03.818 --rc genhtml_function_coverage=1 00:27:03.818 --rc genhtml_legend=1 00:27:03.818 --rc geninfo_all_blocks=1 00:27:03.818 --rc geninfo_unexecuted_blocks=1 00:27:03.818 00:27:03.818 ' 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:03.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.818 --rc genhtml_branch_coverage=1 00:27:03.818 --rc genhtml_function_coverage=1 00:27:03.818 --rc genhtml_legend=1 00:27:03.818 --rc geninfo_all_blocks=1 00:27:03.818 --rc geninfo_unexecuted_blocks=1 00:27:03.818 00:27:03.818 ' 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:03.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.818 --rc genhtml_branch_coverage=1 00:27:03.818 --rc genhtml_function_coverage=1 00:27:03.818 --rc genhtml_legend=1 00:27:03.818 --rc geninfo_all_blocks=1 00:27:03.818 --rc geninfo_unexecuted_blocks=1 00:27:03.818 00:27:03.818 ' 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:03.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.818 --rc genhtml_branch_coverage=1 00:27:03.818 --rc genhtml_function_coverage=1 00:27:03.818 --rc genhtml_legend=1 00:27:03.818 --rc geninfo_all_blocks=1 00:27:03.818 --rc geninfo_unexecuted_blocks=1 00:27:03.818 00:27:03.818 ' 00:27:03.818 17:23:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:27:03.818 17:23:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59439 00:27:03.818 17:23:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:27:03.818 17:23:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:27:03.818 17:23:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59439 00:27:03.818 17:23:41 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59439 ']' 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.818 17:23:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:04.076 [2024-11-26 17:23:41.281573] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:04.076 [2024-11-26 17:23:41.281822] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59439 ] 00:27:04.076 [2024-11-26 17:23:41.475945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.334 [2024-11-26 17:23:41.668867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.334 [2024-11-26 17:23:41.669093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.334 [2024-11-26 17:23:41.669228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.334 [2024-11-26 17:23:41.669242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:04.911 17:23:42 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.911 17:23:42 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:27:04.911 17:23:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:27:04.911 17:23:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.912 17:23:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:04.912 POWER: Cannot set governor of lcore 0 to userspace 00:27:04.912 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:04.912 POWER: Cannot set governor of lcore 0 to performance 00:27:04.912 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:04.912 POWER: Cannot set governor of lcore 0 to userspace 00:27:04.912 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:04.912 POWER: Cannot set governor of lcore 0 to userspace 00:27:04.912 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:27:04.912 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:27:04.912 POWER: Unable to set Power Management Environment for lcore 0 00:27:04.912 [2024-11-26 17:23:42.262554] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:27:04.912 [2024-11-26 17:23:42.262590] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:27:04.912 [2024-11-26 17:23:42.262603] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:27:04.912 [2024-11-26 17:23:42.262645] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:27:04.912 [2024-11-26 17:23:42.262656] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:27:04.912 [2024-11-26 17:23:42.262667] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:27:04.912 17:23:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.912 17:23:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:27:04.912 17:23:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.912 17:23:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 [2024-11-26 17:23:42.718005] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:27:05.481 17:23:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:27:05.481 17:23:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:05.481 17:23:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 ************************************ 00:27:05.481 START TEST scheduler_create_thread 00:27:05.481 ************************************ 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 2 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 3 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 4 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 5 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 6 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 7 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 8 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 9 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 10 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.481 17:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:06.417 17:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.417 17:23:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:27:06.417 17:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.417 17:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:07.354 17:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.354 17:23:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:27:07.354 17:23:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:27:07.354 17:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.354 17:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:08.300 ************************************ 00:27:08.300 END TEST scheduler_create_thread 00:27:08.300 ************************************ 00:27:08.300 17:23:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.300 00:27:08.301 real 0m2.797s 00:27:08.301 user 0m0.022s 00:27:08.301 sys 0m0.010s 00:27:08.301 17:23:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.301 17:23:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:08.301 17:23:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:08.301 17:23:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59439 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59439 ']' 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59439 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59439 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59439' 00:27:08.301 killing process with pid 59439 00:27:08.301 17:23:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59439 00:27:08.301 17:23:45 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59439 00:27:08.560 [2024-11-26 17:23:45.907029] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:27:09.940 00:27:09.940 real 0m6.370s 00:27:09.940 user 0m14.121s 00:27:09.940 sys 0m0.634s 00:27:09.940 17:23:47 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.940 17:23:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:09.940 ************************************ 00:27:09.940 END TEST event_scheduler 00:27:09.940 ************************************ 00:27:10.200 17:23:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:27:10.200 17:23:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:27:10.200 17:23:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.200 17:23:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.200 17:23:47 event -- common/autotest_common.sh@10 -- # set +x 00:27:10.200 ************************************ 00:27:10.200 START TEST app_repeat 00:27:10.200 ************************************ 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59556 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59556' 00:27:10.200 Process app_repeat pid: 59556 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:27:10.200 spdk_app_start Round 0 00:27:10.200 17:23:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59556 ']' 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:10.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.200 17:23:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:10.200 [2024-11-26 17:23:47.499828] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
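Looking back at the scheduler_create_thread subtest that closed above: it is entirely RPC-driven. Pinned threads that report a fixed busy percentage are created, one thread is re-weighted, and another is deleted; scheduler_thread_create prints the new thread's id, which the script captures. The calls from the xtrace, condensed:

    # One pinned thread per core, fully busy (-a 100) or fully idle (-a 0).
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # An unpinned thread whose reported activity is then raised to 50%.
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # Threads can also be deleted while the scheduler is rebalancing.
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"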
00:27:10.200 [2024-11-26 17:23:47.500048] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59556 ] 00:27:10.459 [2024-11-26 17:23:47.679815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:10.459 [2024-11-26 17:23:47.811177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.459 [2024-11-26 17:23:47.811209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.026 17:23:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.026 17:23:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:11.026 17:23:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:11.292 Malloc0 00:27:11.292 17:23:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:11.564 Malloc1 00:27:11.824 17:23:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:11.824 17:23:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:12.083 /dev/nbd0 00:27:12.083 17:23:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:12.083 17:23:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:12.083 17:23:49 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:12.083 1+0 records in 00:27:12.083 1+0 records out 00:27:12.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513065 s, 8.0 MB/s 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:12.083 17:23:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:12.083 17:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:12.083 17:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:12.083 17:23:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:12.342 /dev/nbd1 00:27:12.342 17:23:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:12.342 17:23:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:12.342 17:23:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:12.343 1+0 records in 00:27:12.343 1+0 records out 00:27:12.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480105 s, 8.5 MB/s 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:12.343 17:23:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:12.343 17:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:12.343 17:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:12.343 17:23:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:12.343 17:23:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:12.343 
17:23:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:12.603 { 00:27:12.603 "nbd_device": "/dev/nbd0", 00:27:12.603 "bdev_name": "Malloc0" 00:27:12.603 }, 00:27:12.603 { 00:27:12.603 "nbd_device": "/dev/nbd1", 00:27:12.603 "bdev_name": "Malloc1" 00:27:12.603 } 00:27:12.603 ]' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:12.603 { 00:27:12.603 "nbd_device": "/dev/nbd0", 00:27:12.603 "bdev_name": "Malloc0" 00:27:12.603 }, 00:27:12.603 { 00:27:12.603 "nbd_device": "/dev/nbd1", 00:27:12.603 "bdev_name": "Malloc1" 00:27:12.603 } 00:27:12.603 ]' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:12.603 /dev/nbd1' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:12.603 /dev/nbd1' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:12.603 256+0 records in 00:27:12.603 256+0 records out 00:27:12.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766282 s, 137 MB/s 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:12.603 256+0 records in 00:27:12.603 256+0 records out 00:27:12.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246366 s, 42.6 MB/s 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:12.603 256+0 records in 00:27:12.603 256+0 records out 00:27:12.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269881 s, 38.9 MB/s 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:12.603 17:23:49 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:12.603 17:23:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:12.604 17:23:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:12.864 17:23:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:13.124 17:23:50 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:13.124 17:23:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:13.383 17:23:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:13.383 17:23:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:13.952 17:23:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:15.333 [2024-11-26 17:23:52.619822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:15.333 [2024-11-26 17:23:52.745118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.333 [2024-11-26 17:23:52.745120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.593 [2024-11-26 17:23:52.962142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:15.593 [2024-11-26 17:23:52.962351] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:16.971 17:23:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:16.971 spdk_app_start Round 1 00:27:16.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:16.971 17:23:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:27:16.971 17:23:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:27:16.971 17:23:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59556 ']' 00:27:16.971 17:23:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:16.971 17:23:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.971 17:23:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:27:16.971 17:23:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.971 17:23:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:17.230 17:23:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.230 17:23:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:17.230 17:23:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:17.489 Malloc0 00:27:17.489 17:23:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:17.749 Malloc1 00:27:17.749 17:23:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:17.749 17:23:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:18.010 /dev/nbd0 00:27:18.010 17:23:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:18.010 17:23:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:18.010 1+0 records in 00:27:18.010 1+0 records out 
00:27:18.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239363 s, 17.1 MB/s 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:18.010 17:23:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:18.010 17:23:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.010 17:23:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.010 17:23:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:18.270 /dev/nbd1 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:18.270 1+0 records in 00:27:18.270 1+0 records out 00:27:18.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397339 s, 10.3 MB/s 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:18.270 17:23:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:18.270 17:23:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:18.529 { 00:27:18.529 "nbd_device": "/dev/nbd0", 00:27:18.529 "bdev_name": "Malloc0" 00:27:18.529 }, 00:27:18.529 { 00:27:18.529 "nbd_device": "/dev/nbd1", 00:27:18.529 "bdev_name": "Malloc1" 00:27:18.529 } 
00:27:18.529 ]' 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:18.529 { 00:27:18.529 "nbd_device": "/dev/nbd0", 00:27:18.529 "bdev_name": "Malloc0" 00:27:18.529 }, 00:27:18.529 { 00:27:18.529 "nbd_device": "/dev/nbd1", 00:27:18.529 "bdev_name": "Malloc1" 00:27:18.529 } 00:27:18.529 ]' 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:18.529 /dev/nbd1' 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:18.529 /dev/nbd1' 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.529 17:23:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:18.530 17:23:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:18.530 17:23:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:18.530 17:23:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:18.530 17:23:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:18.801 256+0 records in 00:27:18.801 256+0 records out 00:27:18.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128249 s, 81.8 MB/s 00:27:18.801 17:23:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:18.801 17:23:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:18.801 256+0 records in 00:27:18.801 256+0 records out 00:27:18.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282512 s, 37.1 MB/s 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:18.801 256+0 records in 00:27:18.801 256+0 records out 00:27:18.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032302 s, 32.5 MB/s 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:18.801 17:23:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.079 17:23:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:19.336 17:23:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:19.594 17:23:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:19.594 17:23:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:20.162 17:23:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:21.541 [2024-11-26 17:23:58.706672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.541 [2024-11-26 17:23:58.838386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.541 [2024-11-26 17:23:58.838410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.800 [2024-11-26 17:23:59.071436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:21.800 [2024-11-26 17:23:59.071553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:23.179 spdk_app_start Round 2 00:27:23.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:23.179 17:24:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:23.179 17:24:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:27:23.179 17:24:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59556 ']' 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.179 17:24:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:23.180 17:24:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:23.439 Malloc0 00:27:23.439 17:24:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:23.698 Malloc1 00:27:23.958 17:24:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:23.958 17:24:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:23.958 /dev/nbd0 00:27:24.217 17:24:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:24.217 17:24:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:24.217 1+0 records in 00:27:24.217 1+0 records out 
00:27:24.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386131 s, 10.6 MB/s 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:24.217 17:24:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:24.217 17:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:24.217 17:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:24.218 17:24:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:24.218 /dev/nbd1 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:24.477 1+0 records in 00:27:24.477 1+0 records out 00:27:24.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297609 s, 13.8 MB/s 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:24.477 17:24:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.477 17:24:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:24.736 17:24:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:24.737 { 00:27:24.737 "nbd_device": "/dev/nbd0", 00:27:24.737 "bdev_name": "Malloc0" 00:27:24.737 }, 00:27:24.737 { 00:27:24.737 "nbd_device": "/dev/nbd1", 00:27:24.737 "bdev_name": "Malloc1" 00:27:24.737 } 
00:27:24.737 ]' 00:27:24.737 17:24:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:24.737 { 00:27:24.737 "nbd_device": "/dev/nbd0", 00:27:24.737 "bdev_name": "Malloc0" 00:27:24.737 }, 00:27:24.737 { 00:27:24.737 "nbd_device": "/dev/nbd1", 00:27:24.737 "bdev_name": "Malloc1" 00:27:24.737 } 00:27:24.737 ]' 00:27:24.737 17:24:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:24.737 /dev/nbd1' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:24.737 /dev/nbd1' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:24.737 256+0 records in 00:27:24.737 256+0 records out 00:27:24.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013549 s, 77.4 MB/s 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:24.737 256+0 records in 00:27:24.737 256+0 records out 00:27:24.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250614 s, 41.8 MB/s 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:24.737 256+0 records in 00:27:24.737 256+0 records out 00:27:24.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277875 s, 37.7 MB/s 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:24.737 17:24:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:24.997 17:24:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:25.257 17:24:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:25.516 17:24:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:25.517 17:24:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:25.517 17:24:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:26.085 17:24:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:27.465 [2024-11-26 17:24:04.800194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:27.725 [2024-11-26 17:24:04.933565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.725 [2024-11-26 17:24:04.933563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.725 [2024-11-26 17:24:05.166171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:27.725 [2024-11-26 17:24:05.166293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:29.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:29.107 17:24:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:27:29.107 17:24:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59556 ']' 00:27:29.107 17:24:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:29.107 17:24:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.107 17:24:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:27:29.107 17:24:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.107 17:24:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:29.367 17:24:06 event.app_repeat -- event/event.sh@39 -- # killprocess 59556 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59556 ']' 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59556 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59556 00:27:29.367 killing process with pid 59556 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59556' 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59556 00:27:29.367 17:24:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59556 00:27:30.747 spdk_app_start is called in Round 0. 00:27:30.747 Shutdown signal received, stop current app iteration 00:27:30.747 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:27:30.747 spdk_app_start is called in Round 1. 00:27:30.747 Shutdown signal received, stop current app iteration 00:27:30.747 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:27:30.747 spdk_app_start is called in Round 2. 00:27:30.747 Shutdown signal received, stop current app iteration 00:27:30.747 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:27:30.747 spdk_app_start is called in Round 3. 00:27:30.747 Shutdown signal received, stop current app iteration 00:27:30.747 17:24:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:27:30.747 17:24:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:27:30.747 00:27:30.747 real 0m20.509s 00:27:30.747 user 0m44.077s 00:27:30.747 sys 0m2.935s 00:27:30.747 17:24:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.747 ************************************ 00:27:30.747 END TEST app_repeat 00:27:30.747 ************************************ 00:27:30.747 17:24:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:30.747 17:24:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:27:30.747 17:24:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:30.747 17:24:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:30.747 17:24:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.747 17:24:07 event -- common/autotest_common.sh@10 -- # set +x 00:27:30.747 ************************************ 00:27:30.747 START TEST cpu_locks 00:27:30.747 ************************************ 00:27:30.747 17:24:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:30.747 * Looking for test storage... 
00:27:30.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:30.747 17:24:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:30.747 17:24:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.747 17:24:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.007 17:24:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.007 --rc genhtml_branch_coverage=1 00:27:31.007 --rc genhtml_function_coverage=1 00:27:31.007 --rc genhtml_legend=1 00:27:31.007 --rc geninfo_all_blocks=1 00:27:31.007 --rc geninfo_unexecuted_blocks=1 00:27:31.007 00:27:31.007 ' 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.007 --rc genhtml_branch_coverage=1 00:27:31.007 --rc genhtml_function_coverage=1 
00:27:31.007 --rc genhtml_legend=1 00:27:31.007 --rc geninfo_all_blocks=1 00:27:31.007 --rc geninfo_unexecuted_blocks=1 00:27:31.007 00:27:31.007 ' 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.007 --rc genhtml_branch_coverage=1 00:27:31.007 --rc genhtml_function_coverage=1 00:27:31.007 --rc genhtml_legend=1 00:27:31.007 --rc geninfo_all_blocks=1 00:27:31.007 --rc geninfo_unexecuted_blocks=1 00:27:31.007 00:27:31.007 ' 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.007 --rc genhtml_branch_coverage=1 00:27:31.007 --rc genhtml_function_coverage=1 00:27:31.007 --rc genhtml_legend=1 00:27:31.007 --rc geninfo_all_blocks=1 00:27:31.007 --rc geninfo_unexecuted_blocks=1 00:27:31.007 00:27:31.007 ' 00:27:31.007 17:24:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:27:31.007 17:24:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:27:31.007 17:24:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:27:31.007 17:24:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.007 17:24:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:31.007 ************************************ 00:27:31.007 START TEST default_locks 00:27:31.007 ************************************ 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60016 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60016 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60016 ']' 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.007 17:24:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:31.007 [2024-11-26 17:24:08.354976] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
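The version-comparison trace above is how the harness picks coverage flags: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on ".", "-" and ":" (IFS=.-:) and compares them component by component, so the lcov 1.x --rc options get exported. A minimal sketch of that logic, reconstructed from the xtrace rather than copied from scripts/common.sh:

    # Simplified component-wise version compare; behavior inferred from the
    # trace above, not the exact SPDK scripts/common.sh implementation.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov is pre-2.0"      # 1 < 2 decides it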
00:27:31.007 [2024-11-26 17:24:08.355153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:27:31.285 [2024-11-26 17:24:08.543362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.285 [2024-11-26 17:24:08.680215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.662 17:24:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.662 17:24:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:27:32.662 17:24:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60016 00:27:32.662 17:24:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60016 00:27:32.662 17:24:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:32.920 17:24:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60016 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60016 ']' 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60016 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60016 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.921 killing process with pid 60016 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60016' 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60016 00:27:32.921 17:24:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60016 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60016 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60016 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60016 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60016 ']' 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.214 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:36.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60016) - No such process 00:27:36.214 ERROR: process (pid: 60016) is no longer running 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:36.214 00:27:36.214 real 0m4.793s 00:27:36.214 user 0m4.767s 00:27:36.214 sys 0m0.771s 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.214 17:24:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:36.214 ************************************ 00:27:36.214 END TEST default_locks 00:27:36.214 ************************************ 00:27:36.214 17:24:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:27:36.214 17:24:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:36.214 17:24:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.214 17:24:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:36.214 ************************************ 00:27:36.214 START TEST default_locks_via_rpc 00:27:36.214 ************************************ 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60105 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60105 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60105 ']' 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
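The default_locks test that just ended follows the pattern visible in the trace: start spdk_tgt pinned to core 0, confirm the reactor took its CPU-core lock, kill it, then confirm the pid is really gone and that no /var/tmp/spdk_cpu_lock_* files stay locked. Sketches of the two checks, using the helper names from the trace (locks_exist, no_locks) with reconstructed, not verbatim, bodies:

    # locks_exist <pid>: the reactor holds a lock on a file such as
    # /var/tmp/spdk_cpu_lock_000; lslocks -p lists the locks held by that pid.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # no_locks: after the target exits, the glob must match nothing
    # (nullglob assumed, matching the empty lock_files=() seen above).
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        (( ${#lock_files[@]} == 0 ))
    }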
00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.214 17:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:36.214 [2024-11-26 17:24:13.229705] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:36.214 [2024-11-26 17:24:13.229866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60105 ] 00:27:36.214 [2024-11-26 17:24:13.414403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.214 [2024-11-26 17:24:13.551534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60105 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60105 00:27:37.162 17:24:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60105 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60105 ']' 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60105 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60105 00:27:37.731 killing process with pid 60105 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60105' 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60105 00:27:37.731 17:24:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60105 00:27:41.021 00:27:41.021 real 0m4.671s 00:27:41.021 user 0m4.664s 00:27:41.021 sys 0m0.733s 00:27:41.021 17:24:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:41.021 17:24:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 ************************************ 00:27:41.021 END TEST default_locks_via_rpc 00:27:41.021 ************************************ 00:27:41.021 17:24:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:27:41.021 17:24:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:41.021 17:24:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:41.021 17:24:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 ************************************ 00:27:41.021 START TEST non_locking_app_on_locked_coremask 00:27:41.021 ************************************ 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60179 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60179 /var/tmp/spdk.sock 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60179 ']' 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.021 17:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 [2024-11-26 17:24:17.943464] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:41.021 [2024-11-26 17:24:17.943627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60179 ] 00:27:41.021 [2024-11-26 17:24:18.124280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.021 [2024-11-26 17:24:18.256928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60201 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60201 /var/tmp/spdk2.sock 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60201 ']' 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.957 17:24:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:41.957 [2024-11-26 17:24:19.374162] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:41.957 [2024-11-26 17:24:19.375076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:27:42.215 [2024-11-26 17:24:19.582978] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
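What non_locking_app_on_locked_coremask sets up above, condensed (spdk_tgt stands in for the full /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt path used in this run): the first target locks core 0, and the second coexists on the same core only because it opts out of lock claiming.

    spdk_tgt -m 0x1 &                                                # claims the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second instance logs "CPU core locks deactivated." and starts cleanly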
00:27:42.215 [2024-11-26 17:24:19.583076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.474 [2024-11-26 17:24:19.853094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.061 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.061 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:45.061 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60179 00:27:45.061 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60179 00:27:45.061 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60179 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60179 ']' 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60179 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60179 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.629 killing process with pid 60179 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60179' 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60179 00:27:45.629 17:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60179 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60201 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60201 ']' 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60201 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60201 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.194 killing process with pid 60201 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60201' 00:27:52.194 17:24:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60201 00:27:52.194 17:24:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60201 00:27:54.103 00:27:54.103 real 0m13.630s 00:27:54.103 user 0m13.997s 00:27:54.103 sys 0m1.481s 00:27:54.103 17:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.103 17:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:54.103 ************************************ 00:27:54.103 END TEST non_locking_app_on_locked_coremask 00:27:54.103 ************************************ 00:27:54.104 17:24:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:27:54.104 17:24:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:54.104 17:24:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.104 17:24:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:54.104 ************************************ 00:27:54.104 START TEST locking_app_on_unlocked_coremask 00:27:54.104 ************************************ 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60365 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60365 /var/tmp/spdk.sock 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60365 ']' 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.104 17:24:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:54.398 [2024-11-26 17:24:31.679679] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:54.398 [2024-11-26 17:24:31.679855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:27:54.684 [2024-11-26 17:24:31.868105] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
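A target started with --disable-cpumask-locks can still claim or release locks later over its RPC socket; that is what default_locks_via_rpc exercised earlier with rpc_cmd framework_disable_cpumask_locks / framework_enable_cpumask_locks, and what the test starting here builds on. The equivalent standalone calls, assuming the stock scripts/rpc.py client (the RPC method names are verbatim from the trace; the client path is not shown in it):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim locks for the active mask
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release them again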
00:27:54.684 [2024-11-26 17:24:31.868188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.684 [2024-11-26 17:24:32.001907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60387 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60387 /var/tmp/spdk2.sock 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60387 ']' 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.623 17:24:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:55.883 [2024-11-26 17:24:33.115847] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:55.883 [2024-11-26 17:24:33.115997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:27:55.883 [2024-11-26 17:24:33.306536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.142 [2024-11-26 17:24:33.574594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.679 17:24:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.679 17:24:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:58.679 17:24:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60387 00:27:58.679 17:24:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60387 00:27:58.679 17:24:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60365 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60365 ']' 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60365 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60365 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60365' 00:27:58.940 killing process with pid 60365 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60365 00:27:58.940 17:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60365 00:28:05.511 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60387 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60387 ']' 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60387 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60387 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60387' 00:28:05.512 killing process with pid 60387 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60387 00:28:05.512 17:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60387 00:28:06.885 00:28:06.885 real 0m12.775s 00:28:06.885 user 0m13.110s 00:28:06.885 sys 0m1.330s 00:28:06.885 17:24:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.885 17:24:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:06.885 ************************************ 00:28:06.885 END TEST locking_app_on_unlocked_coremask 00:28:06.885 ************************************ 00:28:07.144 17:24:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:28:07.144 17:24:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:07.144 17:24:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.144 17:24:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:07.144 ************************************ 00:28:07.144 START TEST locking_app_on_locked_coremask 00:28:07.144 ************************************ 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60549 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60549 /var/tmp/spdk.sock 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60549 ']' 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.144 17:24:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:07.144 [2024-11-26 17:24:44.463348] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:07.144 [2024-11-26 17:24:44.463475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60549 ] 00:28:07.402 [2024-11-26 17:24:44.630556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.402 [2024-11-26 17:24:44.747097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60565 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60565 /var/tmp/spdk2.sock 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60565 /var/tmp/spdk2.sock 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60565 /var/tmp/spdk2.sock 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60565 ']' 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.343 17:24:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:08.343 [2024-11-26 17:24:45.777583] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:08.343 [2024-11-26 17:24:45.778212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 00:28:08.658 [2024-11-26 17:24:45.966065] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60549 has claimed it. 00:28:08.658 [2024-11-26 17:24:45.966135] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:09.227 ERROR: process (pid: 60565) is no longer running 00:28:09.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60565) - No such process 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60549 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60549 00:28:09.227 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60549 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60549 ']' 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60549 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60549 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.488 killing process with pid 60549 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60549' 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60549 00:28:09.488 17:24:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60549 00:28:12.782 00:28:12.782 real 0m5.252s 00:28:12.782 user 0m5.480s 00:28:12.782 sys 0m0.864s 00:28:12.782 17:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.782 17:24:49 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:28:12.782 ************************************ 00:28:12.782 END TEST locking_app_on_locked_coremask 00:28:12.782 ************************************ 00:28:12.782 17:24:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:28:12.782 17:24:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.782 17:24:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.782 17:24:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:12.782 ************************************ 00:28:12.782 START TEST locking_overlapped_coremask 00:28:12.782 ************************************ 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60640 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60640 /var/tmp/spdk.sock 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60640 ']' 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.782 17:24:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:12.782 [2024-11-26 17:24:49.790574] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:12.782 [2024-11-26 17:24:49.790747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:28:12.782 [2024-11-26 17:24:49.963801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.782 [2024-11-26 17:24:50.100962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.782 [2024-11-26 17:24:50.101002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.782 [2024-11-26 17:24:50.101042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.719 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.719 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:13.719 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60664 00:28:13.719 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:28:13.719 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60664 /var/tmp/spdk2.sock 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60664 /var/tmp/spdk2.sock 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60664 /var/tmp/spdk2.sock 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60664 ']' 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:13.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.720 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:13.979 [2024-11-26 17:24:51.230776] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:13.979 [2024-11-26 17:24:51.230918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60664 ] 00:28:14.239 [2024-11-26 17:24:51.427350] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60640 has claimed it. 00:28:14.239 [2024-11-26 17:24:51.431688] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:14.499 ERROR: process (pid: 60664) is no longer running 00:28:14.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60664) - No such process 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60640 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60640 ']' 00:28:14.499 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60640 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60640 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60640' 00:28:14.500 killing process with pid 60640 00:28:14.500 17:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60640 00:28:14.500 17:24:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60640 00:28:17.828 00:28:17.828 real 0m5.141s 00:28:17.828 user 0m14.054s 00:28:17.828 sys 0m0.673s 00:28:17.828 17:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.828 17:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:17.828 ************************************ 00:28:17.828 END TEST locking_overlapped_coremask 00:28:17.828 ************************************ 00:28:17.828 17:24:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:28:17.828 17:24:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:17.828 17:24:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.828 17:24:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:17.828 ************************************ 00:28:17.829 START TEST locking_overlapped_coremask_via_rpc 00:28:17.829 ************************************ 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60733 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60733 /var/tmp/spdk.sock 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60733 ']' 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.829 17:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:17.829 [2024-11-26 17:24:55.000485] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:17.829 [2024-11-26 17:24:55.000663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60733 ] 00:28:17.829 [2024-11-26 17:24:55.189767] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:17.829 [2024-11-26 17:24:55.189838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:18.088 [2024-11-26 17:24:55.326956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.088 [2024-11-26 17:24:55.326980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.088 [2024-11-26 17:24:55.326987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60757 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60757 /var/tmp/spdk2.sock 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60757 ']' 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.026 17:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:19.285 [2024-11-26 17:24:56.507030] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:19.285 [2024-11-26 17:24:56.507294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60757 ] 00:28:19.285 [2024-11-26 17:24:56.708261] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
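The two coremasks launched above overlap by construction, which is the point of locking_overlapped_coremask_via_rpc: with locks disabled both targets start fine, and the collision only surfaces once locking is enabled over RPC, as the error below shows. The mask arithmetic, using the values from the trace:

    # -m 0x07 = 0b00111 -> cores 0,1,2  (first target, reactors 0-2)
    # -m 0x1c = 0b11100 -> cores 2,3,4  (second target, reactors 2-4)
    printf '%#x\n' $(( 0x07 & 0x1c ))   # prints 0x4: core 2 is the only contested core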
00:28:19.285 [2024-11-26 17:24:56.708540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:19.882 [2024-11-26 17:24:56.996712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.882 [2024-11-26 17:24:56.999812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.882 [2024-11-26 17:24:56.999825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:21.783 [2024-11-26 17:24:59.179916] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60733 has claimed it. 
00:28:21.783 request: 00:28:21.783 { 00:28:21.783 "method": "framework_enable_cpumask_locks", 00:28:21.783 "req_id": 1 00:28:21.783 } 00:28:21.783 Got JSON-RPC error response 00:28:21.783 response: 00:28:21.783 { 00:28:21.783 "code": -32603, 00:28:21.783 "message": "Failed to claim CPU core: 2" 00:28:21.783 } 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:28:21.783 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60733 /var/tmp/spdk.sock 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60733 ']' 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.784 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60757 /var/tmp/spdk2.sock 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60757 ']' 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
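That request/response pair is the heart of the test: both targets start with --disable-cpumask-locks, the first claims its cores over RPC (creating /var/tmp/spdk_cpu_lock_NNN files), and the second then fails with -32603 on the shared core 2. A condensed repro of the same sequence, assuming a built SPDK tree and omitting the waitforlisten-style readiness polling:

    # Reproduce the overlapping-coremask lock conflict shown above.
    # Paths assume an SPDK build tree; readiness polling omitted for brevity.
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4

    # First instance claims cores 0-2; lock files appear under /var/tmp.
    scripts/rpc.py framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*    # spdk_cpu_lock_000 ... spdk_cpu_lock_002

    # Second instance now fails on the shared core 2 with -32603,
    # "Failed to claim CPU core: 2", exactly as in the response above.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks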
00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.042 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:22.300 00:28:22.300 real 0m4.815s 00:28:22.300 user 0m1.590s 00:28:22.300 sys 0m0.222s 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.300 17:24:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:22.301 ************************************ 00:28:22.301 END TEST locking_overlapped_coremask_via_rpc 00:28:22.301 ************************************ 00:28:22.301 17:24:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:28:22.301 17:24:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60733 ]] 00:28:22.301 17:24:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60733 00:28:22.301 17:24:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60733 ']' 00:28:22.301 17:24:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60733 00:28:22.301 17:24:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:28:22.301 17:24:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.301 17:24:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60733 00:28:22.559 17:24:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.559 killing process with pid 60733 00:28:22.559 17:24:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.559 17:24:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60733' 00:28:22.559 17:24:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60733 00:28:22.559 17:24:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60733 00:28:25.845 17:25:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60757 ]] 00:28:25.845 17:25:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60757 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60757 ']' 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60757 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.845 
17:25:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60757 00:28:25.845 killing process with pid 60757 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60757' 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60757 00:28:25.845 17:25:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60757 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60733 ]] 00:28:28.383 Process with pid 60733 is not found 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60733 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60733 ']' 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60733 00:28:28.383 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60733) - No such process 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60733 is not found' 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60757 ]] 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60757 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60757 ']' 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60757 00:28:28.383 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60757) - No such process 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60757 is not found' 00:28:28.383 Process with pid 60757 is not found 00:28:28.383 17:25:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:28.383 00:28:28.383 real 0m57.539s 00:28:28.383 user 1m38.161s 00:28:28.383 sys 0m7.419s 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.383 17:25:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:28.383 ************************************ 00:28:28.383 END TEST cpu_locks 00:28:28.383 ************************************ 00:28:28.383 00:28:28.383 real 1m29.970s 00:28:28.383 user 2m43.832s 00:28:28.383 sys 0m11.733s 00:28:28.383 17:25:05 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.383 17:25:05 event -- common/autotest_common.sh@10 -- # set +x 00:28:28.383 ************************************ 00:28:28.383 END TEST event 00:28:28.383 ************************************ 00:28:28.383 17:25:05 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:28.383 17:25:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.383 17:25:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.383 17:25:05 -- common/autotest_common.sh@10 -- # set +x 00:28:28.383 ************************************ 00:28:28.383 START TEST thread 00:28:28.383 ************************************ 00:28:28.383 17:25:05 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:28.383 * Looking for test storage... 
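The cleanup above leans on kill -0, which delivers no signal and only reports whether the pid still exists; a dead pid produces the "No such process" lines captured in the log. A condensed sketch of that killprocess pattern (the real helper also inspects the process name via ps before killing, as the reactor_0/reactor_2 checks in the trace show):

    # Condensed sketch of the killprocess pattern used by the cleanup above.
    killprocess_sketch() {
        local pid=$1
        # kill -0 sends no signal; it only checks that the pid exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid"
        # wait reaps the child and propagates its exit status.
        wait "$pid" 2>/dev/null
    }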
00:28:28.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:28:28.383 17:25:05 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:28.383 17:25:05 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:28:28.383 17:25:05 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:28.643 17:25:05 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.643 17:25:05 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.643 17:25:05 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.643 17:25:05 thread -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.643 17:25:05 thread -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.643 17:25:05 thread -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.643 17:25:05 thread -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.643 17:25:05 thread -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.643 17:25:05 thread -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.643 17:25:05 thread -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.643 17:25:05 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.643 17:25:05 thread -- scripts/common.sh@344 -- # case "$op" in 00:28:28.643 17:25:05 thread -- scripts/common.sh@345 -- # : 1 00:28:28.643 17:25:05 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.643 17:25:05 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:28.643 17:25:05 thread -- scripts/common.sh@365 -- # decimal 1 00:28:28.643 17:25:05 thread -- scripts/common.sh@353 -- # local d=1 00:28:28.643 17:25:05 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.643 17:25:05 thread -- scripts/common.sh@355 -- # echo 1 00:28:28.643 17:25:05 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.643 17:25:05 thread -- scripts/common.sh@366 -- # decimal 2 00:28:28.643 17:25:05 thread -- scripts/common.sh@353 -- # local d=2 00:28:28.643 17:25:05 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.643 17:25:05 thread -- scripts/common.sh@355 -- # echo 2 00:28:28.643 17:25:05 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.643 17:25:05 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.643 17:25:05 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.643 17:25:05 thread -- scripts/common.sh@368 -- # return 0 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.643 --rc genhtml_branch_coverage=1 00:28:28.643 --rc genhtml_function_coverage=1 00:28:28.643 --rc genhtml_legend=1 00:28:28.643 --rc geninfo_all_blocks=1 00:28:28.643 --rc geninfo_unexecuted_blocks=1 00:28:28.643 00:28:28.643 ' 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.643 --rc genhtml_branch_coverage=1 00:28:28.643 --rc genhtml_function_coverage=1 00:28:28.643 --rc genhtml_legend=1 00:28:28.643 --rc geninfo_all_blocks=1 00:28:28.643 --rc geninfo_unexecuted_blocks=1 00:28:28.643 00:28:28.643 ' 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:28.643 --rc genhtml_branch_coverage=1 00:28:28.643 --rc genhtml_function_coverage=1 00:28:28.643 --rc genhtml_legend=1 00:28:28.643 --rc geninfo_all_blocks=1 00:28:28.643 --rc geninfo_unexecuted_blocks=1 00:28:28.643 00:28:28.643 ' 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.643 --rc genhtml_branch_coverage=1 00:28:28.643 --rc genhtml_function_coverage=1 00:28:28.643 --rc genhtml_legend=1 00:28:28.643 --rc geninfo_all_blocks=1 00:28:28.643 --rc geninfo_unexecuted_blocks=1 00:28:28.643 00:28:28.643 ' 00:28:28.643 17:25:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:28:28.643 17:25:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.644 17:25:05 thread -- common/autotest_common.sh@10 -- # set +x 00:28:28.644 ************************************ 00:28:28.644 START TEST thread_poller_perf 00:28:28.644 ************************************ 00:28:28.644 17:25:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:28.644 [2024-11-26 17:25:05.919152] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:28.644 [2024-11-26 17:25:05.919453] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60963 ] 00:28:28.904 [2024-11-26 17:25:06.093079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.904 [2024-11-26 17:25:06.234415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.904 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:28:30.296 [2024-11-26T17:25:07.742Z] ====================================== 00:28:30.296 [2024-11-26T17:25:07.742Z] busy:2302826124 (cyc) 00:28:30.296 [2024-11-26T17:25:07.742Z] total_run_count: 311000 00:28:30.296 [2024-11-26T17:25:07.742Z] tsc_hz: 2290000000 (cyc) 00:28:30.296 [2024-11-26T17:25:07.742Z] ====================================== 00:28:30.296 [2024-11-26T17:25:07.742Z] poller_cost: 7404 (cyc), 3233 (nsec) 00:28:30.296 00:28:30.296 real 0m1.630s 00:28:30.296 user 0m1.442s 00:28:30.296 sys 0m0.079s 00:28:30.296 17:25:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.296 17:25:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:30.296 ************************************ 00:28:30.296 END TEST thread_poller_perf 00:28:30.296 ************************************ 00:28:30.296 17:25:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:30.296 17:25:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:28:30.296 17:25:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.296 17:25:07 thread -- common/autotest_common.sh@10 -- # set +x 00:28:30.296 ************************************ 00:28:30.296 START TEST thread_poller_perf 00:28:30.296 ************************************ 00:28:30.296 17:25:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:30.296 [2024-11-26 17:25:07.609772] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:30.296 [2024-11-26 17:25:07.609966] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:28:30.555 [2024-11-26 17:25:07.777743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.555 [2024-11-26 17:25:07.899580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.555 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:28:31.941 [2024-11-26T17:25:09.387Z] ====================================== 00:28:31.941 [2024-11-26T17:25:09.387Z] busy:2293824626 (cyc) 00:28:31.941 [2024-11-26T17:25:09.387Z] total_run_count: 4361000 00:28:31.941 [2024-11-26T17:25:09.387Z] tsc_hz: 2290000000 (cyc) 00:28:31.941 [2024-11-26T17:25:09.387Z] ====================================== 00:28:31.941 [2024-11-26T17:25:09.387Z] poller_cost: 525 (cyc), 229 (nsec) 00:28:31.941 00:28:31.941 real 0m1.603s 00:28:31.941 user 0m1.404s 00:28:31.941 sys 0m0.090s 00:28:31.941 17:25:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.941 17:25:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:31.941 ************************************ 00:28:31.941 END TEST thread_poller_perf 00:28:31.941 ************************************ 00:28:31.941 17:25:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:28:31.941 00:28:31.941 real 0m3.535s 00:28:31.941 user 0m2.993s 00:28:31.941 sys 0m0.341s 00:28:31.941 17:25:09 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.941 17:25:09 thread -- common/autotest_common.sh@10 -- # set +x 00:28:31.941 ************************************ 00:28:31.941 END TEST thread 00:28:31.941 ************************************ 00:28:31.941 17:25:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:28:31.941 17:25:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:31.941 17:25:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:31.941 17:25:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.941 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:28:31.941 ************************************ 00:28:31.941 START TEST app_cmdline 00:28:31.941 ************************************ 00:28:31.941 17:25:09 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:31.941 * Looking for test storage... 
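Both result tables above reduce to one division: poller_cost is busy cycles over total_run_count, then converted to nanoseconds through tsc_hz. Recomputing from the printed numbers (integer division matches the reported values):

    # Recompute poller_cost from the two tables above.
    tsc_hz=2290000000
    echo $(( 2302826124 / 311000 ))    # 7404 cyc  (run with 1 us period)
    echo $(( 2293824626 / 4361000 ))   # 525 cyc   (run with 0 us period)
    # cyc -> nsec: cyc * 1e9 / tsc_hz
    echo $(( 7404 * 1000000000 / tsc_hz ))   # 3233 nsec
    echo $(( 525  * 1000000000 / tsc_hz ))   # 229 nsec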
00:28:31.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:31.941 17:25:09 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:31.941 17:25:09 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.941 17:25:09 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.202 17:25:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:32.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.202 --rc genhtml_branch_coverage=1 00:28:32.202 --rc genhtml_function_coverage=1 00:28:32.202 --rc genhtml_legend=1 00:28:32.202 --rc geninfo_all_blocks=1 00:28:32.202 --rc geninfo_unexecuted_blocks=1 00:28:32.202 00:28:32.202 ' 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:32.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.202 --rc genhtml_branch_coverage=1 00:28:32.202 --rc genhtml_function_coverage=1 00:28:32.202 --rc genhtml_legend=1 00:28:32.202 --rc geninfo_all_blocks=1 00:28:32.202 --rc geninfo_unexecuted_blocks=1 00:28:32.202 
00:28:32.202 ' 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:32.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.202 --rc genhtml_branch_coverage=1 00:28:32.202 --rc genhtml_function_coverage=1 00:28:32.202 --rc genhtml_legend=1 00:28:32.202 --rc geninfo_all_blocks=1 00:28:32.202 --rc geninfo_unexecuted_blocks=1 00:28:32.202 00:28:32.202 ' 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:32.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.202 --rc genhtml_branch_coverage=1 00:28:32.202 --rc genhtml_function_coverage=1 00:28:32.202 --rc genhtml_legend=1 00:28:32.202 --rc geninfo_all_blocks=1 00:28:32.202 --rc geninfo_unexecuted_blocks=1 00:28:32.202 00:28:32.202 ' 00:28:32.202 17:25:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:28:32.202 17:25:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:28:32.202 17:25:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61088 00:28:32.202 17:25:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61088 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61088 ']' 00:28:32.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.202 17:25:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:32.202 [2024-11-26 17:25:09.585212] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:32.202 [2024-11-26 17:25:09.585892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61088 ] 00:28:32.462 [2024-11-26 17:25:09.771363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.724 [2024-11-26 17:25:09.912369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.665 17:25:10 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.665 17:25:10 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:28:33.665 17:25:10 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:28:33.923 { 00:28:33.923 "version": "SPDK v25.01-pre git sha1 f7ce15267", 00:28:33.923 "fields": { 00:28:33.923 "major": 25, 00:28:33.923 "minor": 1, 00:28:33.923 "patch": 0, 00:28:33.923 "suffix": "-pre", 00:28:33.923 "commit": "f7ce15267" 00:28:33.923 } 00:28:33.923 } 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:28:33.923 17:25:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:33.923 17:25:11 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:34.182 request: 00:28:34.182 { 00:28:34.182 "method": "env_dpdk_get_mem_stats", 00:28:34.182 "req_id": 1 00:28:34.182 } 00:28:34.182 Got JSON-RPC error response 00:28:34.182 response: 00:28:34.182 { 00:28:34.182 "code": -32601, 00:28:34.182 "message": "Method not found" 00:28:34.182 } 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:34.182 17:25:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61088 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61088 ']' 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61088 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61088 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61088' 00:28:34.182 killing process with pid 61088 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@973 -- # kill 61088 00:28:34.182 17:25:11 app_cmdline -- common/autotest_common.sh@978 -- # wait 61088 00:28:37.466 00:28:37.467 real 0m4.926s 00:28:37.467 user 0m5.244s 00:28:37.467 sys 0m0.688s 00:28:37.467 17:25:14 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.467 ************************************ 00:28:37.467 END TEST app_cmdline 00:28:37.467 ************************************ 00:28:37.467 17:25:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:37.467 17:25:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:37.467 17:25:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:37.467 17:25:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.467 17:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:37.467 ************************************ 00:28:37.467 START TEST version 00:28:37.467 ************************************ 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:37.467 * Looking for test storage... 
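That -32601 is the point of the cmdline test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and everything else is rejected as "Method not found". The same allowlist can be observed by hand, assuming a built tree:

    # Start a target that only serves two RPCs.
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    scripts/rpc.py spdk_get_version           # allowed: prints the version JSON
    scripts/rpc.py rpc_get_methods            # allowed: lists exactly the two methods
    scripts/rpc.py env_dpdk_get_mem_stats     # rejected: JSON-RPC -32601, Method not found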
00:28:37.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.467 17:25:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.467 17:25:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.467 17:25:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.467 17:25:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.467 17:25:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.467 17:25:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.467 17:25:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.467 17:25:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.467 17:25:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.467 17:25:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.467 17:25:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.467 17:25:14 version -- scripts/common.sh@344 -- # case "$op" in 00:28:37.467 17:25:14 version -- scripts/common.sh@345 -- # : 1 00:28:37.467 17:25:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.467 17:25:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.467 17:25:14 version -- scripts/common.sh@365 -- # decimal 1 00:28:37.467 17:25:14 version -- scripts/common.sh@353 -- # local d=1 00:28:37.467 17:25:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.467 17:25:14 version -- scripts/common.sh@355 -- # echo 1 00:28:37.467 17:25:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.467 17:25:14 version -- scripts/common.sh@366 -- # decimal 2 00:28:37.467 17:25:14 version -- scripts/common.sh@353 -- # local d=2 00:28:37.467 17:25:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.467 17:25:14 version -- scripts/common.sh@355 -- # echo 2 00:28:37.467 17:25:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.467 17:25:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.467 17:25:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.467 17:25:14 version -- scripts/common.sh@368 -- # return 0 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.467 --rc genhtml_branch_coverage=1 00:28:37.467 --rc genhtml_function_coverage=1 00:28:37.467 --rc genhtml_legend=1 00:28:37.467 --rc geninfo_all_blocks=1 00:28:37.467 --rc geninfo_unexecuted_blocks=1 00:28:37.467 00:28:37.467 ' 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.467 --rc genhtml_branch_coverage=1 00:28:37.467 --rc genhtml_function_coverage=1 00:28:37.467 --rc genhtml_legend=1 00:28:37.467 --rc geninfo_all_blocks=1 00:28:37.467 --rc geninfo_unexecuted_blocks=1 00:28:37.467 00:28:37.467 ' 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.467 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:37.467 --rc genhtml_branch_coverage=1 00:28:37.467 --rc genhtml_function_coverage=1 00:28:37.467 --rc genhtml_legend=1 00:28:37.467 --rc geninfo_all_blocks=1 00:28:37.467 --rc geninfo_unexecuted_blocks=1 00:28:37.467 00:28:37.467 ' 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.467 --rc genhtml_branch_coverage=1 00:28:37.467 --rc genhtml_function_coverage=1 00:28:37.467 --rc genhtml_legend=1 00:28:37.467 --rc geninfo_all_blocks=1 00:28:37.467 --rc geninfo_unexecuted_blocks=1 00:28:37.467 00:28:37.467 ' 00:28:37.467 17:25:14 version -- app/version.sh@17 -- # get_header_version major 00:28:37.467 17:25:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # cut -f2 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # tr -d '"' 00:28:37.467 17:25:14 version -- app/version.sh@17 -- # major=25 00:28:37.467 17:25:14 version -- app/version.sh@18 -- # get_header_version minor 00:28:37.467 17:25:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # cut -f2 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # tr -d '"' 00:28:37.467 17:25:14 version -- app/version.sh@18 -- # minor=1 00:28:37.467 17:25:14 version -- app/version.sh@19 -- # get_header_version patch 00:28:37.467 17:25:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # cut -f2 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # tr -d '"' 00:28:37.467 17:25:14 version -- app/version.sh@19 -- # patch=0 00:28:37.467 17:25:14 version -- app/version.sh@20 -- # get_header_version suffix 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # cut -f2 00:28:37.467 17:25:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:37.467 17:25:14 version -- app/version.sh@14 -- # tr -d '"' 00:28:37.467 17:25:14 version -- app/version.sh@20 -- # suffix=-pre 00:28:37.467 17:25:14 version -- app/version.sh@22 -- # version=25.1 00:28:37.467 17:25:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:28:37.467 17:25:14 version -- app/version.sh@28 -- # version=25.1rc0 00:28:37.467 17:25:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:37.467 17:25:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:28:37.467 17:25:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:28:37.467 17:25:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:28:37.467 ************************************ 00:28:37.467 END TEST version 00:28:37.467 ************************************ 00:28:37.467 00:28:37.467 real 0m0.300s 00:28:37.467 user 0m0.184s 00:28:37.467 sys 0m0.169s 00:28:37.467 17:25:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.467 17:25:14 version -- common/autotest_common.sh@10 -- # set +x 00:28:37.467 17:25:14 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:28:37.467 17:25:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:28:37.467 17:25:14 -- spdk/autotest.sh@194 -- # uname -s 00:28:37.467 17:25:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:28:37.467 17:25:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:37.467 17:25:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:37.467 17:25:14 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:28:37.467 17:25:14 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:37.467 17:25:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:37.467 17:25:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.467 17:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:37.467 ************************************ 00:28:37.467 START TEST blockdev_nvme 00:28:37.467 ************************************ 00:28:37.467 17:25:14 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:37.467 * Looking for test storage... 00:28:37.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:37.467 17:25:14 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.467 17:25:14 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.467 17:25:14 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.467 17:25:14 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.467 17:25:14 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.468 17:25:14 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.468 --rc genhtml_branch_coverage=1 00:28:37.468 --rc genhtml_function_coverage=1 00:28:37.468 --rc genhtml_legend=1 00:28:37.468 --rc geninfo_all_blocks=1 00:28:37.468 --rc geninfo_unexecuted_blocks=1 00:28:37.468 00:28:37.468 ' 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.468 --rc genhtml_branch_coverage=1 00:28:37.468 --rc genhtml_function_coverage=1 00:28:37.468 --rc genhtml_legend=1 00:28:37.468 --rc geninfo_all_blocks=1 00:28:37.468 --rc geninfo_unexecuted_blocks=1 00:28:37.468 00:28:37.468 ' 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.468 --rc genhtml_branch_coverage=1 00:28:37.468 --rc genhtml_function_coverage=1 00:28:37.468 --rc genhtml_legend=1 00:28:37.468 --rc geninfo_all_blocks=1 00:28:37.468 --rc geninfo_unexecuted_blocks=1 00:28:37.468 00:28:37.468 ' 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.468 --rc genhtml_branch_coverage=1 00:28:37.468 --rc genhtml_function_coverage=1 00:28:37.468 --rc genhtml_legend=1 00:28:37.468 --rc geninfo_all_blocks=1 00:28:37.468 --rc geninfo_unexecuted_blocks=1 00:28:37.468 00:28:37.468 ' 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:37.468 17:25:14 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61289 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:37.468 17:25:14 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61289 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61289 ']' 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.468 17:25:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:37.726 [2024-11-26 17:25:14.949283] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
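The setup_nvme_conf step that follows feeds gen_nvme.sh output to load_subsystem_config, attaching one NVMe controller per PCIe address. Stripped of the test plumbing, the generated blob is ordinary bdev subsystem JSON; a hand-written equivalent for the first controller (traddr copied from the log, file path illustrative):

    # Hand-written equivalent of one gen_nvme.sh entry (file path illustrative).
    cat > /tmp/nvme_bdev.json <<'EOF'
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
    EOF
    # Same -j form the test uses, then wait for examination before querying.
    scripts/rpc.py load_subsystem_config -j "$(</tmp/nvme_bdev.json)"
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs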
00:28:37.726 [2024-11-26 17:25:14.949504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:28:37.726 [2024-11-26 17:25:15.113104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.984 [2024-11-26 17:25:15.237893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.920 17:25:16 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.920 17:25:16 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:28:38.920 17:25:16 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:28:38.920 17:25:16 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:28:38.920 17:25:16 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:28:38.920 17:25:16 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:28:38.920 17:25:16 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:38.920 17:25:16 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:28:38.920 17:25:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.920 17:25:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:39.181 17:25:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.181 17:25:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:28:39.181 17:25:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.181 17:25:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.442 17:25:16 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5777060b-574a-4186-92f4-2f6a08375687"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5777060b-574a-4186-92f4-2f6a08375687",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c8e9388e-e072-4b84-974c-bd5b9e08c3d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c8e9388e-e072-4b84-974c-bd5b9e08c3d9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "681795cb-d3ad-40e5-865e-2aae65eb848b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "681795cb-d3ad-40e5-865e-2aae65eb848b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c24584b1-e1aa-4397-aa5e-44bb349d0664"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c24584b1-e1aa-4397-aa5e-44bb349d0664",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fd9177c2-50aa-42ff-aaa2-d268cf472a65"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fd9177c2-50aa-42ff-aaa2-d268cf472a65",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9b0a6691-6a89-4076-9263-704ac91690ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9b0a6691-6a89-4076-9263-704ac91690ff",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:28:39.442 17:25:16 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61289 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61289 ']' 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61289 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:28:39.442 17:25:16 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61289 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61289' 00:28:39.442 killing process with pid 61289 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61289 00:28:39.442 17:25:16 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61289 00:28:42.742 17:25:19 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:42.742 17:25:19 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:42.742 17:25:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:42.742 17:25:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.742 17:25:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:42.742 ************************************ 00:28:42.742 START TEST bdev_hello_world 00:28:42.742 ************************************ 00:28:42.742 17:25:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:42.742 [2024-11-26 17:25:19.808645] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:42.742 [2024-11-26 17:25:19.808876] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61384 ] 00:28:42.742 [2024-11-26 17:25:19.988664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.742 [2024-11-26 17:25:20.149803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.700 [2024-11-26 17:25:20.917886] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:43.700 [2024-11-26 17:25:20.917947] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:43.700 [2024-11-26 17:25:20.917979] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:43.700 [2024-11-26 17:25:20.921700] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:43.700 [2024-11-26 17:25:20.922211] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:43.700 [2024-11-26 17:25:20.922248] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:43.700 [2024-11-26 17:25:20.922409] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
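The hello_world pass above is the stock hello_bdev example end to end: open Nvme0n1, write a block, read it back. A standalone re-run, assuming the bdev.json generated for this job is still in place, is simply:

    cd /home/vagrant/spdk_repo/spdk
    sudo build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

The NOTICE lines it prints (write completed, then "Read string from bdev : Hello World!") are exactly the ones captured in this log.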
00:28:43.700 00:28:43.700 [2024-11-26 17:25:20.922434] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:45.077 00:28:45.077 real 0m2.599s 00:28:45.077 user 0m2.134s 00:28:45.077 sys 0m0.352s 00:28:45.077 17:25:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.077 ************************************ 00:28:45.077 END TEST bdev_hello_world 00:28:45.077 ************************************ 00:28:45.077 17:25:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:45.077 17:25:22 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:28:45.077 17:25:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:45.077 17:25:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.077 17:25:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:45.077 ************************************ 00:28:45.077 START TEST bdev_bounds 00:28:45.077 ************************************ 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61437 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61437' 00:28:45.077 Process bdevio pid: 61437 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61437 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61437 ']' 00:28:45.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.077 17:25:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:45.077 [2024-11-26 17:25:22.479937] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
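The bdevio run beginning here is a two-step affair, as the xtrace above shows: start the app with -w so it waits for an RPC, then trigger the suites. By hand (a sketch; paths as in this workspace, typically run as root):

    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    # once bdevio reports it is listening, run every registered suite:
    sudo test/bdev/bdevio/tests.py perform_tests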
00:28:45.077 [2024-11-26 17:25:22.480100] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61437 ] 00:28:45.336 [2024-11-26 17:25:22.649321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.594 [2024-11-26 17:25:22.807874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.594 [2024-11-26 17:25:22.808027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.594 [2024-11-26 17:25:22.808065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.530 17:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.530 17:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:28:46.530 17:25:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:46.530 I/O targets: 00:28:46.530 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:28:46.530 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:28:46.530 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:46.530 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:46.530 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:46.530 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:28:46.530 00:28:46.530 00:28:46.530 CUnit - A unit testing framework for C - Version 2.1-3 00:28:46.530 http://cunit.sourceforge.net/ 00:28:46.530 00:28:46.530 00:28:46.530 Suite: bdevio tests on: Nvme3n1 00:28:46.530 Test: blockdev write read block ...passed 00:28:46.530 Test: blockdev write zeroes read block ...passed 00:28:46.530 Test: blockdev write zeroes read no split ...passed 00:28:46.530 Test: blockdev write zeroes read split ...passed 00:28:46.530 Test: blockdev write zeroes read split partial ...passed 00:28:46.530 Test: blockdev reset ...[2024-11-26 17:25:23.838590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:28:46.530 [2024-11-26 17:25:23.843159] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
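The reset case that just passed for Nvme3n1 tears the controller down and brings it back, producing the disconnect/complete NOTICE pair above; every following suite repeats it against its own controller. The same pair can be provoked outside bdevio; a sketch, assuming the bdev_nvme_reset_controller RPC method present in current SPDK trees (verify with scripts/rpc.py --help before relying on it):

    # "Nvme3" is the controller name from the earlier bdev_nvme_attach_controller call
    sudo scripts/rpc.py bdev_nvme_reset_controller Nvme3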
00:28:46.530 passed 00:28:46.530 Test: blockdev write read 8 blocks ...passed 00:28:46.530 Test: blockdev write read size > 128k ...passed 00:28:46.530 Test: blockdev write read invalid size ...passed 00:28:46.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:46.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:46.530 Test: blockdev write read max offset ...passed 00:28:46.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:46.531 Test: blockdev writev readv 8 blocks ...passed 00:28:46.531 Test: blockdev writev readv 30 x 1block ...passed 00:28:46.531 Test: blockdev writev readv block ...passed 00:28:46.531 Test: blockdev writev readv size > 128k ...passed 00:28:46.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:46.531 Test: blockdev comparev and writev ...[2024-11-26 17:25:23.852551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0e0a000 len:0x1000 00:28:46.531 [2024-11-26 17:25:23.852745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:46.531 passed 00:28:46.531 Test: blockdev nvme passthru rw ...passed 00:28:46.531 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:25:23.853638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:46.531 [2024-11-26 17:25:23.853757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:46.531 passed 00:28:46.531 Test: blockdev nvme admin passthru ...passed 00:28:46.531 Test: blockdev copy ...passed 00:28:46.531 Suite: bdevio tests on: Nvme2n3 00:28:46.531 Test: blockdev write read block ...passed 00:28:46.531 Test: blockdev write zeroes read block ...passed 00:28:46.531 Test: blockdev write zeroes read no split ...passed 00:28:46.531 Test: blockdev write zeroes read split ...passed 00:28:46.531 Test: blockdev write zeroes read split partial ...passed 00:28:46.531 Test: blockdev reset ...[2024-11-26 17:25:23.946324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:28:46.531 [2024-11-26 17:25:23.950952] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
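All six suites iterate over the bdev list captured back at blockdev.sh@785-786. That capture reduces to the following sketch (default /var/tmp/spdk.sock RPC socket assumed):

    # keep only bdevs no other module has claimed, then pull out their names
    bdevs=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    mapfile -t bdevs_name < <(printf '%s\n' "$bdevs" | jq -r .name)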
00:28:46.531 passed 00:28:46.531 Test: blockdev write read 8 blocks ...passed 00:28:46.531 Test: blockdev write read size > 128k ...passed 00:28:46.531 Test: blockdev write read invalid size ...passed 00:28:46.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:46.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:46.531 Test: blockdev write read max offset ...passed 00:28:46.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:46.531 Test: blockdev writev readv 8 blocks ...passed 00:28:46.531 Test: blockdev writev readv 30 x 1block ...passed 00:28:46.531 Test: blockdev writev readv block ...passed 00:28:46.531 Test: blockdev writev readv size > 128k ...passed 00:28:46.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:46.531 Test: blockdev comparev and writev ...[2024-11-26 17:25:23.960904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293806000 len:0x1000 00:28:46.531 [2024-11-26 17:25:23.961040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:46.531 passed 00:28:46.531 Test: blockdev nvme passthru rw ...passed 00:28:46.531 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:25:23.962010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:46.531 [2024-11-26 17:25:23.962121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:46.531 passed 00:28:46.531 Test: blockdev nvme admin passthru ...passed 00:28:46.531 Test: blockdev copy ...passed 00:28:46.531 Suite: bdevio tests on: Nvme2n2 00:28:46.531 Test: blockdev write read block ...passed 00:28:46.790 Test: blockdev write zeroes read block ...passed 00:28:46.790 Test: blockdev write zeroes read no split ...passed 00:28:46.790 Test: blockdev write zeroes read split ...passed 00:28:46.790 Test: blockdev write zeroes read split partial ...passed 00:28:46.790 Test: blockdev reset ...[2024-11-26 17:25:24.057185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:28:46.790 [2024-11-26 17:25:24.062136] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:28:46.790 passed 00:28:46.790 Test: blockdev write read 8 blocks ...passed 00:28:46.790 Test: blockdev write read size > 128k ...passed 00:28:46.790 Test: blockdev write read invalid size ...passed 00:28:46.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:46.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:46.790 Test: blockdev write read max offset ...passed 00:28:46.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:46.790 Test: blockdev writev readv 8 blocks ...passed 00:28:46.790 Test: blockdev writev readv 30 x 1block ...passed 00:28:46.790 Test: blockdev writev readv block ...passed 00:28:46.790 Test: blockdev writev readv size > 128k ...passed 00:28:46.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:46.790 Test: blockdev comparev and writev ...[2024-11-26 17:25:24.074965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e3c000 len:0x1000 00:28:46.790 [2024-11-26 17:25:24.075176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:46.790 passed 00:28:46.790 Test: blockdev nvme passthru rw ...passed 00:28:46.790 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:25:24.076486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:46.790 [2024-11-26 17:25:24.076582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:46.790 passed 00:28:46.790 Test: blockdev nvme admin passthru ...passed 00:28:46.790 Test: blockdev copy ...passed 00:28:46.790 Suite: bdevio tests on: Nvme2n1 00:28:46.790 Test: blockdev write read block ...passed 00:28:46.790 Test: blockdev write zeroes read block ...passed 00:28:46.790 Test: blockdev write zeroes read no split ...passed 00:28:46.790 Test: blockdev write zeroes read split ...passed 00:28:46.790 Test: blockdev write zeroes read split partial ...passed 00:28:46.790 Test: blockdev reset ...[2024-11-26 17:25:24.179509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:28:46.790 [2024-11-26 17:25:24.184001] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
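Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces behind the single controller at 0000:00:12.0 (serial 12342 in the dump above), so each of their reset tests cycles the same PCI device. A sketch of pulling that mapping out of bdev_get_bdevs (field paths as in the JSON shown earlier):

    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name) \(.driver_specific.nvme[0].pci_address) nsid=\(.driver_specific.nvme[0].ns_data.id)"'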
00:28:46.790 passed 00:28:46.790 Test: blockdev write read 8 blocks ...passed 00:28:46.790 Test: blockdev write read size > 128k ...passed 00:28:46.790 Test: blockdev write read invalid size ...passed 00:28:46.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:46.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:46.790 Test: blockdev write read max offset ...passed 00:28:46.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:46.790 Test: blockdev writev readv 8 blocks ...passed 00:28:46.790 Test: blockdev writev readv 30 x 1block ...passed 00:28:46.790 Test: blockdev writev readv block ...passed 00:28:46.790 Test: blockdev writev readv size > 128k ...passed 00:28:46.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:46.790 Test: blockdev comparev and writev ...[2024-11-26 17:25:24.192985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e38000 len:0x1000 00:28:46.790 [2024-11-26 17:25:24.193140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:46.790 passed 00:28:46.790 Test: blockdev nvme passthru rw ...passed 00:28:46.790 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:25:24.194015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:46.790 [2024-11-26 17:25:24.194117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:46.790 passed 00:28:46.790 Test: blockdev nvme admin passthru ...passed 00:28:46.790 Test: blockdev copy ...passed 00:28:46.790 Suite: bdevio tests on: Nvme1n1 00:28:46.790 Test: blockdev write read block ...passed 00:28:46.790 Test: blockdev write zeroes read block ...passed 00:28:47.051 Test: blockdev write zeroes read no split ...passed 00:28:47.051 Test: blockdev write zeroes read split ...passed 00:28:47.051 Test: blockdev write zeroes read split partial ...passed 00:28:47.051 Test: blockdev reset ...[2024-11-26 17:25:24.293247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:28:47.051 [2024-11-26 17:25:24.297554] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:28:47.051 passed 00:28:47.051 Test: blockdev write read 8 blocks ...passed 00:28:47.051 Test: blockdev write read size > 128k ...passed 00:28:47.051 Test: blockdev write read invalid size ...passed 00:28:47.051 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:47.051 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:47.051 Test: blockdev write read max offset ...passed 00:28:47.051 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:47.051 Test: blockdev writev readv 8 blocks ...passed 00:28:47.051 Test: blockdev writev readv 30 x 1block ...passed 00:28:47.051 Test: blockdev writev readv block ...passed 00:28:47.051 Test: blockdev writev readv size > 128k ...passed 00:28:47.051 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:47.051 Test: blockdev comparev and writev ...[2024-11-26 17:25:24.306280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e34000 len:0x1000 00:28:47.051 [2024-11-26 17:25:24.306448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:47.051 passed 00:28:47.051 Test: blockdev nvme passthru rw ...passed 00:28:47.051 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:25:24.307213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:47.051 [2024-11-26 17:25:24.307313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:47.051 passed 00:28:47.051 Test: blockdev nvme admin passthru ...passed 00:28:47.051 Test: blockdev copy ...passed 00:28:47.051 Suite: bdevio tests on: Nvme0n1 00:28:47.051 Test: blockdev write read block ...passed 00:28:47.051 Test: blockdev write zeroes read block ...passed 00:28:47.051 Test: blockdev write zeroes read no split ...passed 00:28:47.051 Test: blockdev write zeroes read split ...passed 00:28:47.051 Test: blockdev write zeroes read split partial ...passed 00:28:47.051 Test: blockdev reset ...[2024-11-26 17:25:24.409719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:28:47.051 [2024-11-26 17:25:24.414293] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
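Nvme0n1 is the one target formatted with separate metadata (md_size 64, md_interleave false in the dump above), so the next records show bdevio skipping comparev_and_writev for it. Checking that up front, in sketch form:

    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'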
00:28:47.051 passed 00:28:47.051 Test: blockdev write read 8 blocks ...passed 00:28:47.051 Test: blockdev write read size > 128k ...passed 00:28:47.051 Test: blockdev write read invalid size ...passed 00:28:47.051 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:47.051 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:47.051 Test: blockdev write read max offset ...passed 00:28:47.051 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:47.051 Test: blockdev writev readv 8 blocks ...passed 00:28:47.051 Test: blockdev writev readv 30 x 1block ...passed 00:28:47.051 Test: blockdev writev readv block ...passed 00:28:47.051 Test: blockdev writev readv size > 128k ...passed 00:28:47.051 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:47.051 Test: blockdev comparev and writev ...passed 00:28:47.051 Test: blockdev nvme passthru rw ...[2024-11-26 17:25:24.422151] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:28:47.051 separate metadata which is not supported yet. 00:28:47.051 passed 00:28:47.051 Test: blockdev nvme passthru vendor specific ...passed 00:28:47.051 Test: blockdev nvme admin passthru ...[2024-11-26 17:25:24.422644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:28:47.051 [2024-11-26 17:25:24.422707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:28:47.051 passed 00:28:47.051 Test: blockdev copy ...passed 00:28:47.051 00:28:47.051 Run Summary: Type Total Ran Passed Failed Inactive 00:28:47.051 suites 6 6 n/a 0 0 00:28:47.051 tests 138 138 138 0 0 00:28:47.051 asserts 893 893 893 0 n/a 00:28:47.051 00:28:47.051 Elapsed time = 1.875 seconds 00:28:47.051 0 00:28:47.051 17:25:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61437 00:28:47.051 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61437 ']' 00:28:47.051 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61437 00:28:47.051 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:28:47.051 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.051 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61437 00:28:47.309 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:47.309 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:47.309 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61437' 00:28:47.309 killing process with pid 61437 00:28:47.309 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61437 00:28:47.309 17:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61437 00:28:48.684 17:25:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:48.684 00:28:48.684 real 0m3.411s 00:28:48.684 user 0m8.803s 00:28:48.684 sys 0m0.535s 00:28:48.684 17:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.684 17:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:48.684 ************************************ 00:28:48.684 END 
TEST bdev_bounds 00:28:48.684 ************************************ 00:28:48.684 17:25:25 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:28:48.684 17:25:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:48.684 17:25:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.684 17:25:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:48.684 ************************************ 00:28:48.684 START TEST bdev_nbd 00:28:48.684 ************************************ 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61502 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61502 /var/tmp/spdk-nbd.sock 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61502 ']' 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.684 
17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.684 17:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:48.684 [2024-11-26 17:25:25.960632] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:48.684 [2024-11-26 17:25:25.960784] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.942 [2024-11-26 17:25:26.142570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.942 [2024-11-26 17:25:26.296718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:49.878 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:50.137 17:25:27 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:50.137 1+0 records in 00:28:50.137 1+0 records out 00:28:50.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597148 s, 6.9 MB/s 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:50.137 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:50.396 1+0 records in 00:28:50.396 1+0 records out 00:28:50.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896748 s, 4.6 MB/s 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:50.396 17:25:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:50.396 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:28:50.654 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:28:50.654 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:28:50.654 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:28:50.654 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:50.655 1+0 records in 00:28:50.655 1+0 records out 00:28:50.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785769 s, 5.2 MB/s 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:50.655 17:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:50.913 1+0 records in 00:28:50.913 1+0 records out 00:28:50.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000915828 s, 4.5 MB/s 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:50.913 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:51.171 1+0 records in 00:28:51.171 1+0 records out 00:28:51.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736604 s, 5.6 MB/s 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:51.171 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:51.428 1+0 records in 00:28:51.428 1+0 records out 00:28:51.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736563 s, 5.6 MB/s 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:51.428 17:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd0", 00:28:51.686 "bdev_name": "Nvme0n1" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd1", 00:28:51.686 "bdev_name": "Nvme1n1" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd2", 00:28:51.686 "bdev_name": "Nvme2n1" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd3", 00:28:51.686 "bdev_name": "Nvme2n2" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd4", 00:28:51.686 "bdev_name": "Nvme2n3" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd5", 00:28:51.686 "bdev_name": "Nvme3n1" 00:28:51.686 } 00:28:51.686 ]' 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd0", 00:28:51.686 "bdev_name": "Nvme0n1" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd1", 00:28:51.686 "bdev_name": "Nvme1n1" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 
"nbd_device": "/dev/nbd2", 00:28:51.686 "bdev_name": "Nvme2n1" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd3", 00:28:51.686 "bdev_name": "Nvme2n2" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd4", 00:28:51.686 "bdev_name": "Nvme2n3" 00:28:51.686 }, 00:28:51.686 { 00:28:51.686 "nbd_device": "/dev/nbd5", 00:28:51.686 "bdev_name": "Nvme3n1" 00:28:51.686 } 00:28:51.686 ]' 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:51.686 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:51.944 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:52.202 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:28:52.461 17:25:29 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:52.461 17:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:52.720 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:52.979 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
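Each nbd_stop_disk above is followed by waitfornbd_exit, which loops until the kernel drops the device from /proc/partitions. Reconstructed from the xtrace (the per-iteration delay is assumed; it does not appear in the trace):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the stopped device is gone from the partition table
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1  # assumed pacing between polls
        done
    }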
00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:53.238 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:53.498 17:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:28:53.759 /dev/nbd0 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:53.759 1+0 records in 00:28:53.759 1+0 records out 00:28:53.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838882 s, 4.9 MB/s 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:53.759 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:28:54.328 /dev/nbd1 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:54.328 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.329 1+0 records in 00:28:54.329 1+0 records out 
00:28:54.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655238 s, 6.3 MB/s 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:28:54.329 /dev/nbd10 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:54.329 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.588 1+0 records in 00:28:54.588 1+0 records out 00:28:54.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608061 s, 6.7 MB/s 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:54.588 17:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:28:54.848 /dev/nbd11 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:28:54.848 17:25:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.848 1+0 records in 00:28:54.848 1+0 records out 00:28:54.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782894 s, 5.2 MB/s 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:54.848 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:28:55.107 /dev/nbd12 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:55.107 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:55.107 1+0 records in 00:28:55.107 1+0 records out 00:28:55.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768155 s, 5.3 MB/s 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:55.108 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:28:55.367 /dev/nbd13 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:55.367 1+0 records in 00:28:55.367 1+0 records out 00:28:55.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731163 s, 5.6 MB/s 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:55.367 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:55.626 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd0", 00:28:55.626 "bdev_name": "Nvme0n1" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd1", 00:28:55.626 "bdev_name": "Nvme1n1" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd10", 00:28:55.626 "bdev_name": "Nvme2n1" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd11", 00:28:55.626 "bdev_name": "Nvme2n2" 00:28:55.626 }, 
00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd12", 00:28:55.626 "bdev_name": "Nvme2n3" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd13", 00:28:55.626 "bdev_name": "Nvme3n1" 00:28:55.626 } 00:28:55.626 ]' 00:28:55.626 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd0", 00:28:55.626 "bdev_name": "Nvme0n1" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd1", 00:28:55.626 "bdev_name": "Nvme1n1" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd10", 00:28:55.626 "bdev_name": "Nvme2n1" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd11", 00:28:55.626 "bdev_name": "Nvme2n2" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd12", 00:28:55.626 "bdev_name": "Nvme2n3" 00:28:55.626 }, 00:28:55.626 { 00:28:55.626 "nbd_device": "/dev/nbd13", 00:28:55.626 "bdev_name": "Nvme3n1" 00:28:55.626 } 00:28:55.626 ]' 00:28:55.626 17:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:55.626 /dev/nbd1 00:28:55.626 /dev/nbd10 00:28:55.626 /dev/nbd11 00:28:55.626 /dev/nbd12 00:28:55.626 /dev/nbd13' 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:55.626 /dev/nbd1 00:28:55.626 /dev/nbd10 00:28:55.626 /dev/nbd11 00:28:55.626 /dev/nbd12 00:28:55.626 /dev/nbd13' 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:55.626 256+0 records in 00:28:55.626 256+0 records out 00:28:55.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00737168 s, 142 MB/s 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:55.626 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:55.885 256+0 records in 00:28:55.885 256+0 records out 00:28:55.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0992439 s, 10.6 MB/s 00:28:55.885 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:55.885 17:25:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:55.885 256+0 records in 00:28:55.885 256+0 records out 00:28:55.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101169 s, 10.4 MB/s 00:28:55.885 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:55.885 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:28:56.144 256+0 records in 00:28:56.144 256+0 records out 00:28:56.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112554 s, 9.3 MB/s 00:28:56.144 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:56.144 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:28:56.144 256+0 records in 00:28:56.144 256+0 records out 00:28:56.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108344 s, 9.7 MB/s 00:28:56.144 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:56.144 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:28:56.144 256+0 records in 00:28:56.144 256+0 records out 00:28:56.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108859 s, 9.6 MB/s 00:28:56.144 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:56.144 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:28:56.427 256+0 records in 00:28:56.427 256+0 records out 00:28:56.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105371 s, 10.0 MB/s 00:28:56.427 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 
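After the jq '.[] | .nbd_device' / grep -c pipeline confirms all six devices are attached (count=6), nbd_dd_data_verify exercises them in two symmetric phases, the second of which continues for the remaining devices below: the write phase seeds a 1 MiB temp file from /dev/urandom and dds it onto every NBD device with O_DIRECT, and the verify phase compares the first 1M of each device back against that same file before deleting it. A condensed sketch, with paths and the device list taken straight from the log and error handling omitted:

    # Condensed sketch of the write/verify pattern (nbd_common.sh @70-@85).
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # verify phase
    done
    rm "$tmp_file"
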
00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:56.428 17:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:56.692 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:56.952 17:25:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:56.952 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.212 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.471 17:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.730 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:57.990 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:58.250 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:58.509 malloc_lvol_verify 00:28:58.509 17:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:58.768 6f30f15f-1b8e-41d5-a84c-b23205562663 00:28:58.768 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:59.028 e30b7a67-febf-404c-ae52-e762a851963a 00:28:59.028 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:59.287 /dev/nbd0 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
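Before mkfs runs below, wait_for_nbd_set_capacity (@146-@150) confirms the kernel has picked up the device's size: /sys/block/nbd0/size must exist and be non-zero (8192 512-byte sectors here, matching the 4 MiB lvol created just above). A hedged sketch; the retry behaviour on a zero size is an assumption, since this run sees the size immediately and never loops:

    # Sketch of wait_for_nbd_set_capacity as suggested by the trace.
    wait_for_nbd_set_capacity() {
        local nbd=$1                       # e.g. nbd0
        local size_file=/sys/block/$nbd/size
        [[ -e $size_file ]] || return 1    # kernel has not created the node
        while (( $(< "$size_file") == 0 )); do
            sleep 0.1                      # assumed poll interval
        done
    }
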
00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:59.287 mke2fs 1.47.0 (5-Feb-2023) 00:28:59.287 Discarding device blocks: 0/4096 done 00:28:59.287 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:59.287 00:28:59.287 Allocating group tables: 0/1 done 00:28:59.287 Writing inode tables: 0/1 done 00:28:59.287 Creating journal (1024 blocks): done 00:28:59.287 Writing superblocks and filesystem accounting information: 0/1 done 00:28:59.287 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:59.287 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61502 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61502 ']' 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61502 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61502 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.547 killing process with pid 61502 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61502' 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61502 00:28:59.547 17:25:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61502 00:29:00.925 17:25:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:00.925 00:29:00.925 real 0m12.469s 00:29:00.925 user 0m16.866s 00:29:00.925 sys 0m4.660s 00:29:00.925 17:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.925 17:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 
-- # set +x 00:29:00.925 ************************************ 00:29:00.925 END TEST bdev_nbd 00:29:00.925 ************************************ 00:29:01.184 17:25:38 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:29:01.184 skipping fio tests on NVMe due to multi-ns failures. 00:29:01.184 17:25:38 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:29:01.184 17:25:38 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:01.184 17:25:38 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:01.184 17:25:38 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:01.184 17:25:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:01.184 17:25:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.184 17:25:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.184 ************************************ 00:29:01.184 START TEST bdev_verify 00:29:01.184 ************************************ 00:29:01.184 17:25:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:01.184 [2024-11-26 17:25:38.506123] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:01.184 [2024-11-26 17:25:38.506277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61906 ] 00:29:01.443 [2024-11-26 17:25:38.686921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:01.443 [2024-11-26 17:25:38.831370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.443 [2024-11-26 17:25:38.831404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.381 Running I/O for 5 seconds... 
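bdev_verify drives all six namespaces through bdevperf with the invocation traced above. The flags that matter for reading the results table that follows: -q 128 keeps 128 I/Os outstanding per job, -o 4096 uses 4 KiB I/Os, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, and -m 0x3 starts reactors on cores 0 and 1, which is why every bdev appears twice (Core Mask 0x1 and 0x2) in the table. The command, verbatim from the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
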
00:29:04.699 16640.00 IOPS, 65.00 MiB/s [2024-11-26T17:25:43.083Z] 16896.00 IOPS, 66.00 MiB/s [2024-11-26T17:25:44.020Z] 17664.00 IOPS, 69.00 MiB/s [2024-11-26T17:25:44.969Z] 18192.00 IOPS, 71.06 MiB/s [2024-11-26T17:25:44.969Z] 18560.00 IOPS, 72.50 MiB/s 00:29:07.523 Latency(us) 00:29:07.523 [2024-11-26T17:25:44.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.523 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x0 length 0xbd0bd 00:29:07.523 Nvme0n1 : 5.06 1517.20 5.93 0.00 0.00 84136.90 17399.95 80131.35 00:29:07.523 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:29:07.523 Nvme0n1 : 5.06 1541.94 6.02 0.00 0.00 82845.66 13278.91 76926.10 00:29:07.523 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x0 length 0xa0000 00:29:07.523 Nvme1n1 : 5.06 1516.77 5.92 0.00 0.00 83991.00 17628.90 74636.63 00:29:07.523 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0xa0000 length 0xa0000 00:29:07.523 Nvme1n1 : 5.07 1541.18 6.02 0.00 0.00 82747.82 14194.70 70973.48 00:29:07.523 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x0 length 0x80000 00:29:07.523 Nvme2n1 : 5.07 1516.01 5.92 0.00 0.00 83863.83 18086.79 72347.17 00:29:07.523 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x80000 length 0x80000 00:29:07.523 Nvme2n1 : 5.07 1540.43 6.02 0.00 0.00 82650.43 14767.06 69141.91 00:29:07.523 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x0 length 0x80000 00:29:07.523 Nvme2n2 : 5.07 1515.29 5.92 0.00 0.00 83745.10 19346.00 69141.91 00:29:07.523 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x80000 length 0x80000 00:29:07.523 Nvme2n2 : 5.07 1539.70 6.01 0.00 0.00 82537.70 15911.80 70973.48 00:29:07.523 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x0 length 0x80000 00:29:07.523 Nvme2n3 : 5.07 1514.57 5.92 0.00 0.00 83593.08 17857.84 72347.17 00:29:07.523 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x80000 length 0x80000 00:29:07.523 Nvme2n3 : 5.07 1538.96 6.01 0.00 0.00 82430.28 15911.80 77383.99 00:29:07.523 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x0 length 0x20000 00:29:07.523 Nvme3n1 : 5.07 1513.85 5.91 0.00 0.00 83509.71 16827.58 77383.99 00:29:07.523 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:07.523 Verification LBA range: start 0x20000 length 0x20000 00:29:07.523 Nvme3n1 : 5.08 1538.28 6.01 0.00 0.00 82334.84 12305.89 79673.46 00:29:07.523 [2024-11-26T17:25:44.969Z] =================================================================================================================== 00:29:07.523 [2024-11-26T17:25:44.969Z] Total : 18334.18 71.62 0.00 0.00 83193.84 12305.89 80131.35 00:29:09.430 00:29:09.430 real 0m8.012s 00:29:09.430 user 0m14.770s 00:29:09.430 sys 0m0.334s 00:29:09.430 17:25:46 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.430 17:25:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:09.430 ************************************ 00:29:09.430 END TEST bdev_verify 00:29:09.430 ************************************ 00:29:09.430 17:25:46 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:09.430 17:25:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:09.430 17:25:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.430 17:25:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:09.430 ************************************ 00:29:09.430 START TEST bdev_verify_big_io 00:29:09.430 ************************************ 00:29:09.430 17:25:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:09.430 [2024-11-26 17:25:46.572144] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:09.430 [2024-11-26 17:25:46.572389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62005 ] 00:29:09.430 [2024-11-26 17:25:46.758760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:09.687 [2024-11-26 17:25:46.890566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.687 [2024-11-26 17:25:46.890599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.623 Running I/O for 5 seconds... 
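bdev_verify_big_io reuses the same harness with one change: -o 65536 raises the I/O size from 4 KiB to 64 KiB, so the run stresses large transfers and the MiB/s column becomes the interesting number in the table below rather than raw IOPS. Verbatim from the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
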
00:29:15.106 1666.00 IOPS, 104.12 MiB/s [2024-11-26T17:25:53.490Z] 2488.50 IOPS, 155.53 MiB/s [2024-11-26T17:25:53.750Z] 2880.67 IOPS, 180.04 MiB/s [2024-11-26T17:25:54.009Z] 2790.00 IOPS, 174.38 MiB/s 00:29:16.563 Latency(us) 00:29:16.563 [2024-11-26T17:25:54.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.563 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x0 length 0xbd0b 00:29:16.563 Nvme0n1 : 5.60 120.05 7.50 0.00 0.00 1031611.12 14767.06 1304080.54 00:29:16.563 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0xbd0b length 0xbd0b 00:29:16.563 Nvme0n1 : 5.43 188.59 11.79 0.00 0.00 663360.95 31365.70 611745.65 00:29:16.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x0 length 0xa000 00:29:16.563 Nvme1n1 : 5.62 121.44 7.59 0.00 0.00 975011.36 24039.41 1443280.15 00:29:16.563 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0xa000 length 0xa000 00:29:16.563 Nvme1n1 : 5.55 184.50 11.53 0.00 0.00 653613.16 60441.94 703324.34 00:29:16.563 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x0 length 0x8000 00:29:16.563 Nvme2n1 : 5.72 121.59 7.60 0.00 0.00 929855.92 45560.40 1853552.68 00:29:16.563 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x8000 length 0x8000 00:29:16.563 Nvme2n1 : 5.55 190.00 11.87 0.00 0.00 632279.67 100278.67 710650.63 00:29:16.563 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x0 length 0x8000 00:29:16.563 Nvme2n2 : 5.79 143.05 8.94 0.00 0.00 772532.82 18086.79 1545848.29 00:29:16.563 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x8000 length 0x8000 00:29:16.563 Nvme2n2 : 5.55 187.60 11.72 0.00 0.00 625818.62 61357.72 710650.63 00:29:16.563 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x0 length 0x8000 00:29:16.563 Nvme2n3 : 5.95 176.59 11.04 0.00 0.00 600503.84 9730.24 2124625.61 00:29:16.563 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x8000 length 0x8000 00:29:16.563 Nvme2n3 : 5.58 203.60 12.73 0.00 0.00 577731.74 9558.53 721640.08 00:29:16.563 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x0 length 0x2000 00:29:16.563 Nvme3n1 : 6.15 275.02 17.19 0.00 0.00 373333.35 286.18 2153930.79 00:29:16.563 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:16.563 Verification LBA range: start 0x2000 length 0x2000 00:29:16.563 Nvme3n1 : 5.58 202.83 12.68 0.00 0.00 569283.31 9787.47 714313.78 00:29:16.563 [2024-11-26T17:25:54.009Z] =================================================================================================================== 00:29:16.563 [2024-11-26T17:25:54.009Z] Total : 2114.87 132.18 0.00 0.00 654532.55 286.18 2153930.79 00:29:19.857 00:29:19.857 real 0m10.482s 00:29:19.857 user 0m19.667s 00:29:19.857 sys 0m0.343s 00:29:19.857 17:25:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:19.857 17:25:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.857 ************************************ 00:29:19.857 END TEST bdev_verify_big_io 00:29:19.857 ************************************ 00:29:19.857 17:25:57 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:19.857 17:25:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:19.857 17:25:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.857 17:25:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:19.857 ************************************ 00:29:19.857 START TEST bdev_write_zeroes 00:29:19.857 ************************************ 00:29:19.857 17:25:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:19.857 [2024-11-26 17:25:57.130872] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:19.857 [2024-11-26 17:25:57.131036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62136 ] 00:29:20.117 [2024-11-26 17:25:57.315989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.117 [2024-11-26 17:25:57.449935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.066 Running I/O for 1 seconds... 
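bdev_write_zeroes switches the workload to the zero-fill path: -w write_zeroes with 4 KiB I/Os in a single one-second pass, and the EAL line above shows -c 0x1, so only one reactor runs and each bdev appears once in the table that follows. Verbatim from the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
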
00:29:22.001 57216.00 IOPS, 223.50 MiB/s 00:29:22.001 Latency(us) 00:29:22.001 [2024-11-26T17:25:59.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.001 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:22.001 Nvme0n1 : 1.02 9512.89 37.16 0.00 0.00 13429.28 9844.71 29992.02 00:29:22.001 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:22.001 Nvme1n1 : 1.02 9502.58 37.12 0.00 0.00 13427.46 10188.13 29992.02 00:29:22.001 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:22.001 Nvme2n1 : 1.02 9491.87 37.08 0.00 0.00 13347.91 9844.71 25756.51 00:29:22.001 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:22.001 Nvme2n2 : 1.03 9481.44 37.04 0.00 0.00 13313.39 9844.71 23581.51 00:29:22.001 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:22.001 Nvme2n3 : 1.03 9471.50 37.00 0.00 0.00 13286.00 9386.82 21406.52 00:29:22.001 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:22.001 Nvme3n1 : 1.03 9461.04 36.96 0.00 0.00 13262.20 8013.14 22551.25 00:29:22.001 [2024-11-26T17:25:59.447Z] =================================================================================================================== 00:29:22.001 [2024-11-26T17:25:59.447Z] Total : 56921.33 222.35 0.00 0.00 13344.37 8013.14 29992.02 00:29:23.396 00:29:23.396 real 0m3.473s 00:29:23.396 user 0m3.083s 00:29:23.396 sys 0m0.270s 00:29:23.396 ************************************ 00:29:23.396 END TEST bdev_write_zeroes 00:29:23.396 ************************************ 00:29:23.396 17:26:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.396 17:26:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:23.396 17:26:00 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:23.396 17:26:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:23.396 17:26:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.396 17:26:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:23.396 ************************************ 00:29:23.396 START TEST bdev_json_nonenclosed 00:29:23.396 ************************************ 00:29:23.396 17:26:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:23.396 [2024-11-26 17:26:00.656509] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:29:23.396 [2024-11-26 17:26:00.656665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:29:23.396 [2024-11-26 17:26:00.832603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.656 [2024-11-26 17:26:00.956872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.656 [2024-11-26 17:26:00.957069] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:23.656 [2024-11-26 17:26:00.957095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:23.656 [2024-11-26 17:26:00.957106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:23.915 00:29:23.915 real 0m0.674s 00:29:23.915 user 0m0.437s 00:29:23.915 sys 0m0.130s 00:29:23.915 17:26:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.915 ************************************ 00:29:23.916 END TEST bdev_json_nonenclosed 00:29:23.916 ************************************ 00:29:23.916 17:26:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:23.916 17:26:01 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:23.916 17:26:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:23.916 17:26:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.916 17:26:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:23.916 ************************************ 00:29:23.916 START TEST bdev_json_nonarray 00:29:23.916 ************************************ 00:29:23.916 17:26:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:24.175 [2024-11-26 17:26:01.399861] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:24.175 [2024-11-26 17:26:01.399981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62220 ] 00:29:24.175 [2024-11-26 17:26:01.582211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.435 [2024-11-26 17:26:01.711402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.435 [2024-11-26 17:26:01.711604] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
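bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at deliberately malformed configs and must fail cleanly through spdk_app_stop rather than crash. Only the filenames and the two error strings appear in the log, so the file contents sketched below are assumptions that merely match those errors:

    # Assumed shapes for the malformed configs; only the names and the
    # resulting errors ("not enclosed in {}", "'subsystems' should be an
    # array") are confirmed by the log.
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
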
00:29:24.435 [2024-11-26 17:26:01.711642] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:24.435 [2024-11-26 17:26:01.711653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:24.694 00:29:24.694 real 0m0.701s 00:29:24.694 user 0m0.451s 00:29:24.694 sys 0m0.144s 00:29:24.694 17:26:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.694 ************************************ 00:29:24.694 END TEST bdev_json_nonarray 00:29:24.694 ************************************ 00:29:24.694 17:26:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:29:24.694 17:26:02 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:29:24.694 00:29:24.694 real 0m47.465s 00:29:24.694 user 1m11.454s 00:29:24.694 sys 0m7.898s 00:29:24.694 17:26:02 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.694 17:26:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:24.694 ************************************ 00:29:24.694 END TEST blockdev_nvme 00:29:24.694 ************************************ 00:29:24.694 17:26:02 -- spdk/autotest.sh@209 -- # uname -s 00:29:24.694 17:26:02 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:29:24.694 17:26:02 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:24.694 17:26:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:24.694 17:26:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.694 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:29:24.955 ************************************ 00:29:24.955 START TEST blockdev_nvme_gpt 00:29:24.955 ************************************ 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:24.955 * Looking for test storage... 
00:29:24.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.955 17:26:02 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.955 --rc genhtml_branch_coverage=1 00:29:24.955 --rc genhtml_function_coverage=1 00:29:24.955 --rc genhtml_legend=1 00:29:24.955 --rc geninfo_all_blocks=1 00:29:24.955 --rc geninfo_unexecuted_blocks=1 00:29:24.955 00:29:24.955 ' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.955 --rc 
genhtml_branch_coverage=1 00:29:24.955 --rc genhtml_function_coverage=1 00:29:24.955 --rc genhtml_legend=1 00:29:24.955 --rc geninfo_all_blocks=1 00:29:24.955 --rc geninfo_unexecuted_blocks=1 00:29:24.955 00:29:24.955 ' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.955 --rc genhtml_branch_coverage=1 00:29:24.955 --rc genhtml_function_coverage=1 00:29:24.955 --rc genhtml_legend=1 00:29:24.955 --rc geninfo_all_blocks=1 00:29:24.955 --rc geninfo_unexecuted_blocks=1 00:29:24.955 00:29:24.955 ' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.955 --rc genhtml_branch_coverage=1 00:29:24.955 --rc genhtml_function_coverage=1 00:29:24.955 --rc genhtml_legend=1 00:29:24.955 --rc geninfo_all_blocks=1 00:29:24.955 --rc geninfo_unexecuted_blocks=1 00:29:24.955 00:29:24.955 ' 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:29:24.955 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62304 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:25.215 17:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62304 00:29:25.215 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62304 ']' 00:29:25.215 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.215 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.215 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.215 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.215 17:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:25.215 [2024-11-26 17:26:02.520756] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:25.215 [2024-11-26 17:26:02.521039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62304 ] 00:29:25.475 [2024-11-26 17:26:02.708679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.475 [2024-11-26 17:26:02.845272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.415 17:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.415 17:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:29:26.415 17:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:29:26.415 17:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:29:26.415 17:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:26.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.245 Waiting for block devices as requested 00:29:27.504 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.504 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.504 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.766 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.048 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:33.048 BYT; 00:29:33.048 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:33.048 BYT; 00:29:33.048 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:33.048 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:33.048 17:26:10 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:33.049 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:33.049 17:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:33.986 The operation has completed successfully. 00:29:33.986 17:26:11 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:34.923 The operation has completed successfully. 00:29:34.923 17:26:12 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:35.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:36.446 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:36.446 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:36.446 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:36.446 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:36.446 17:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:29:36.446 17:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.446 17:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:36.446 [] 00:29:36.446 17:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.446 17:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:29:36.446 17:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:29:36.446 17:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:36.446 17:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:36.446 17:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:36.446 17:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.446 17:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.013 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.013 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:29:37.013 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:29:37.013 17:26:14 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.013 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.013 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.013 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "704b1553-d231-4708-b14f-2af11a4c3d5c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "704b1553-d231-4708-b14f-2af11a4c3d5c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0ee4b407-f18d-46e2-b1b0-29aeb76896dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0ee4b407-f18d-46e2-b1b0-29aeb76896dd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2ad09a4c-bea6-4366-8266-624e2e94cd6b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2ad09a4c-bea6-4366-8266-624e2e94cd6b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "56bbb7bf-9738-4f2d-9311-e6946350ab36"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "56bbb7bf-9738-4f2d-9311-e6946350ab36",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6a46ae80-83a0-47b4-8319-a242be5cb7ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6a46ae80-83a0-47b4-8319-a242be5cb7ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:29:37.014 17:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62304 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62304 ']' 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62304 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62304 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62304' 00:29:37.014 killing process with pid 62304 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62304 00:29:37.014 17:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62304 00:29:40.300 17:26:17 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:40.300 17:26:17 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:40.300 17:26:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:40.300 17:26:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.300 17:26:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:40.300 ************************************ 00:29:40.300 START TEST bdev_hello_world 00:29:40.300 ************************************ 00:29:40.300 17:26:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:40.300 [2024-11-26 
17:26:17.460703] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:40.300 [2024-11-26 17:26:17.461009] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62959 ] 00:29:40.300 [2024-11-26 17:26:17.642473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.558 [2024-11-26 17:26:17.792838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.127 [2024-11-26 17:26:18.555522] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:41.127 [2024-11-26 17:26:18.555727] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:41.127 [2024-11-26 17:26:18.555777] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:41.127 [2024-11-26 17:26:18.559155] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:41.127 [2024-11-26 17:26:18.559610] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:41.127 [2024-11-26 17:26:18.559655] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:41.127 [2024-11-26 17:26:18.559859] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:41.127 00:29:41.127 [2024-11-26 17:26:18.559884] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:42.503 00:29:42.503 real 0m2.560s 00:29:42.503 user 0m2.101s 00:29:42.503 sys 0m0.349s 00:29:42.503 17:26:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.503 ************************************ 00:29:42.503 END TEST bdev_hello_world 00:29:42.503 ************************************ 00:29:42.503 17:26:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:42.761 17:26:19 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:29:42.762 17:26:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:42.762 17:26:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.762 17:26:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:42.762 ************************************ 00:29:42.762 START TEST bdev_bounds 00:29:42.762 ************************************ 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63007 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63007' 00:29:42.762 Process bdevio pid: 63007 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63007 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63007 ']' 00:29:42.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
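
Note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is printed by waitforlisten in autotest_common.sh, which polls the target's RPC socket until it accepts connections. A minimal sketch of the same polling idea, assuming the repo's scripts/rpc.py and the default socket path:

# Block until the app under test answers a trivial RPC (rpc_get_methods).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done
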
00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.762 17:26:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:42.762 [2024-11-26 17:26:20.093739] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:42.762 [2024-11-26 17:26:20.093980] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63007 ] 00:29:43.021 [2024-11-26 17:26:20.278902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.021 [2024-11-26 17:26:20.440145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.021 [2024-11-26 17:26:20.440278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.021 [2024-11-26 17:26:20.440326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.962 17:26:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.962 17:26:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:29:43.962 17:26:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:43.962 I/O targets: 00:29:43.962 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:43.962 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:43.962 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:43.962 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:43.962 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:43.962 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:43.962 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:43.962 00:29:43.962 00:29:43.962 CUnit - A unit testing framework for C - Version 2.1-3 00:29:43.962 http://cunit.sourceforge.net/ 00:29:43.962 00:29:43.962 00:29:43.962 Suite: bdevio tests on: Nvme3n1 00:29:43.962 Test: blockdev write read block ...passed 00:29:43.962 Test: blockdev write zeroes read block ...passed 00:29:43.962 Test: blockdev write zeroes read no split ...passed 00:29:43.962 Test: blockdev write zeroes read split ...passed 00:29:44.223 Test: blockdev write zeroes read split partial ...passed 00:29:44.223 Test: blockdev reset ...[2024-11-26 17:26:21.429361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:29:44.223 [2024-11-26 17:26:21.433925] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:29:44.223 passed 00:29:44.223 Test: blockdev write read 8 blocks ...passed 00:29:44.223 Test: blockdev write read size > 128k ...passed 00:29:44.223 Test: blockdev write read invalid size ...passed 00:29:44.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.223 Test: blockdev write read max offset ...passed 00:29:44.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.223 Test: blockdev writev readv 8 blocks ...passed 00:29:44.223 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.223 Test: blockdev writev readv block ...passed 00:29:44.223 Test: blockdev writev readv size > 128k ...passed 00:29:44.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.223 Test: blockdev comparev and writev ...[2024-11-26 17:26:21.444708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae604000 len:0x1000 00:29:44.223 [2024-11-26 17:26:21.444883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:44.223 passed 00:29:44.223 Test: blockdev nvme passthru rw ...passed 00:29:44.223 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:26:21.446129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:44.223 [2024-11-26 17:26:21.446241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:44.223 passed 00:29:44.223 Test: blockdev nvme admin passthru ...passed 00:29:44.223 Test: blockdev copy ...passed 00:29:44.223 Suite: bdevio tests on: Nvme2n3 00:29:44.223 Test: blockdev write read block ...passed 00:29:44.223 Test: blockdev write zeroes read block ...passed 00:29:44.223 Test: blockdev write zeroes read no split ...passed 00:29:44.223 Test: blockdev write zeroes read split ...passed 00:29:44.223 Test: blockdev write zeroes read split partial ...passed 00:29:44.223 Test: blockdev reset ...[2024-11-26 17:26:21.541518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:44.223 [2024-11-26 17:26:21.546435] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:44.223 passed 00:29:44.223 Test: blockdev write read 8 blocks ...passed 00:29:44.223 Test: blockdev write read size > 128k ...passed 00:29:44.223 Test: blockdev write read invalid size ...passed 00:29:44.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.223 Test: blockdev write read max offset ...passed 00:29:44.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.223 Test: blockdev writev readv 8 blocks ...passed 00:29:44.223 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.223 Test: blockdev writev readv block ...passed 00:29:44.223 Test: blockdev writev readv size > 128k ...passed 00:29:44.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.223 Test: blockdev comparev and writev ...[2024-11-26 17:26:21.555768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae602000 len:0x1000 00:29:44.223 [2024-11-26 17:26:21.555933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:44.223 passed 00:29:44.223 Test: blockdev nvme passthru rw ...passed 00:29:44.223 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:26:21.556957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:44.223 [2024-11-26 17:26:21.557068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:44.223 passed 00:29:44.223 Test: blockdev nvme admin passthru ...passed 00:29:44.223 Test: blockdev copy ...passed 00:29:44.223 Suite: bdevio tests on: Nvme2n2 00:29:44.223 Test: blockdev write read block ...passed 00:29:44.223 Test: blockdev write zeroes read block ...passed 00:29:44.223 Test: blockdev write zeroes read no split ...passed 00:29:44.223 Test: blockdev write zeroes read split ...passed 00:29:44.223 Test: blockdev write zeroes read split partial ...passed 00:29:44.223 Test: blockdev reset ...[2024-11-26 17:26:21.646206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:44.223 [2024-11-26 17:26:21.651053] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:29:44.223 passed 00:29:44.223 Test: blockdev write read 8 blocks ...passed 00:29:44.223 Test: blockdev write read size > 128k ...passed 00:29:44.223 Test: blockdev write read invalid size ...passed 00:29:44.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.223 Test: blockdev write read max offset ...passed 00:29:44.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.223 Test: blockdev writev readv 8 blocks ...passed 00:29:44.223 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.223 Test: blockdev writev readv block ...passed 00:29:44.223 Test: blockdev writev readv size > 128k ...passed 00:29:44.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.223 Test: blockdev comparev and writev ...[2024-11-26 17:26:21.661377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2438000 len:0x1000 00:29:44.223 [2024-11-26 17:26:21.661546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:44.223 passed 00:29:44.223 Test: blockdev nvme passthru rw ...passed 00:29:44.223 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:26:21.662629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:44.223 [2024-11-26 17:26:21.662725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:44.223 passed 00:29:44.223 Test: blockdev nvme admin passthru ...passed 00:29:44.223 Test: blockdev copy ...passed 00:29:44.223 Suite: bdevio tests on: Nvme2n1 00:29:44.484 Test: blockdev write read block ...passed 00:29:44.484 Test: blockdev write zeroes read block ...passed 00:29:44.484 Test: blockdev write zeroes read no split ...passed 00:29:44.484 Test: blockdev write zeroes read split ...passed 00:29:44.484 Test: blockdev write zeroes read split partial ...passed 00:29:44.484 Test: blockdev reset ...[2024-11-26 17:26:21.761833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:44.484 [2024-11-26 17:26:21.766963] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:29:44.484 passed 00:29:44.484 Test: blockdev write read 8 blocks ...passed 00:29:44.484 Test: blockdev write read size > 128k ...passed 00:29:44.484 Test: blockdev write read invalid size ...passed 00:29:44.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.484 Test: blockdev write read max offset ...passed 00:29:44.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.484 Test: blockdev writev readv 8 blocks ...passed 00:29:44.484 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.484 Test: blockdev writev readv block ...passed 00:29:44.484 Test: blockdev writev readv size > 128k ...passed 00:29:44.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.484 Test: blockdev comparev and writev ...[2024-11-26 17:26:21.778188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2434000 len:0x1000 00:29:44.484 [2024-11-26 17:26:21.778356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:44.484 passed 00:29:44.484 Test: blockdev nvme passthru rw ...passed 00:29:44.484 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:26:21.779411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:44.484 [2024-11-26 17:26:21.779535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:44.484 passed 00:29:44.484 Test: blockdev nvme admin passthru ...passed 00:29:44.484 Test: blockdev copy ...passed 00:29:44.484 Suite: bdevio tests on: Nvme1n1p2 00:29:44.484 Test: blockdev write read block ...passed 00:29:44.484 Test: blockdev write zeroes read block ...passed 00:29:44.484 Test: blockdev write zeroes read no split ...passed 00:29:44.484 Test: blockdev write zeroes read split ...passed 00:29:44.484 Test: blockdev write zeroes read split partial ...passed 00:29:44.484 Test: blockdev reset ...[2024-11-26 17:26:21.867770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:44.484 [2024-11-26 17:26:21.872440] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:29:44.484 passed 00:29:44.484 Test: blockdev write read 8 blocks ...passed 00:29:44.484 Test: blockdev write read size > 128k ...passed 00:29:44.484 Test: blockdev write read invalid size ...passed 00:29:44.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.484 Test: blockdev write read max offset ...passed 00:29:44.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.484 Test: blockdev writev readv 8 blocks ...passed 00:29:44.484 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.484 Test: blockdev writev readv block ...passed 00:29:44.484 Test: blockdev writev readv size > 128k ...passed 00:29:44.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.484 Test: blockdev comparev and writev ...[2024-11-26 17:26:21.881668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c2430000 len:0x1000 00:29:44.484 [2024-11-26 17:26:21.881743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:44.484 passed 00:29:44.484 Test: blockdev nvme passthru rw ...passed 00:29:44.484 Test: blockdev nvme passthru vendor specific ...passed 00:29:44.484 Test: blockdev nvme admin passthru ...passed 00:29:44.484 Test: blockdev copy ...passed 00:29:44.484 Suite: bdevio tests on: Nvme1n1p1 00:29:44.484 Test: blockdev write read block ...passed 00:29:44.484 Test: blockdev write zeroes read block ...passed 00:29:44.484 Test: blockdev write zeroes read no split ...passed 00:29:44.484 Test: blockdev write zeroes read split ...passed 00:29:44.744 Test: blockdev write zeroes read split partial ...passed 00:29:44.744 Test: blockdev reset ...[2024-11-26 17:26:21.987060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:44.744 [2024-11-26 17:26:21.991814] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:29:44.744 passed 00:29:44.744 Test: blockdev write read 8 blocks ...passed 00:29:44.744 Test: blockdev write read size > 128k ...passed 00:29:44.744 Test: blockdev write read invalid size ...passed 00:29:44.744 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.744 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.744 Test: blockdev write read max offset ...passed 00:29:44.744 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.744 Test: blockdev writev readv 8 blocks ...passed 00:29:44.744 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.744 Test: blockdev writev readv block ...passed 00:29:44.744 Test: blockdev writev readv size > 128k ...passed 00:29:44.744 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.744 Test: blockdev comparev and writev ...[2024-11-26 17:26:22.000872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ae80e000 len:0x1000 00:29:44.744 [2024-11-26 17:26:22.000949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:44.744 passed 00:29:44.744 Test: blockdev nvme passthru rw ...passed 00:29:44.744 Test: blockdev nvme passthru vendor specific ...passed 00:29:44.744 Test: blockdev nvme admin passthru ...passed 00:29:44.744 Test: blockdev copy ...passed 00:29:44.744 Suite: bdevio tests on: Nvme0n1 00:29:44.744 Test: blockdev write read block ...passed 00:29:44.744 Test: blockdev write zeroes read block ...passed 00:29:44.744 Test: blockdev write zeroes read no split ...passed 00:29:44.744 Test: blockdev write zeroes read split ...passed 00:29:44.744 Test: blockdev write zeroes read split partial ...passed 00:29:44.744 Test: blockdev reset ...[2024-11-26 17:26:22.085692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:44.744 [2024-11-26 17:26:22.090309] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:29:44.744 passed 00:29:44.744 Test: blockdev write read 8 blocks ...passed 00:29:44.744 Test: blockdev write read size > 128k ...passed 00:29:44.744 Test: blockdev write read invalid size ...passed 00:29:44.744 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.744 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.744 Test: blockdev write read max offset ...passed 00:29:44.744 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:44.744 Test: blockdev writev readv 8 blocks ...passed 00:29:44.744 Test: blockdev writev readv 30 x 1block ...passed 00:29:44.744 Test: blockdev writev readv block ...passed 00:29:44.744 Test: blockdev writev readv size > 128k ...passed 00:29:44.744 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:44.744 Test: blockdev comparev and writev ...passed 00:29:44.744 Test: blockdev nvme passthru rw ...[2024-11-26 17:26:22.099268] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:29:44.744 separate metadata which is not supported yet. 
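The skip notice above is expected behavior rather than a failure: bdevio declines to run comparev_and_writev against Nvme0n1 because that namespace carries separate (non-interleaved) metadata, which the test does not support yet. A bdev's metadata layout can be inspected over RPC; a minimal sketch, assuming the md_size and md_interleave fields that recent SPDK releases include in bdev_get_bdevs output, run against whichever SPDK app currently owns the bdev:

    # md_size > 0 with md_interleave == false means separate metadata,
    # which is what triggers the comparev_and_writev skip above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {md_size, md_interleave}'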
00:29:44.744 passed 00:29:44.744 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:26:22.099895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:44.744 [2024-11-26 17:26:22.100060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:44.744 passed 00:29:44.744 Test: blockdev nvme admin passthru ...passed 00:29:44.744 Test: blockdev copy ...passed 00:29:44.744 00:29:44.744 Run Summary: Type Total Ran Passed Failed Inactive 00:29:44.744 suites 7 7 n/a 0 0 00:29:44.744 tests 161 161 161 0 0 00:29:44.744 asserts 1025 1025 1025 0 n/a 00:29:44.744 00:29:44.744 Elapsed time = 2.090 seconds 00:29:44.744 0 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63007 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63007 ']' 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63007 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63007 00:29:44.744 killing process with pid 63007 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63007' 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63007 00:29:44.744 17:26:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63007 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:46.125 00:29:46.125 real 0m3.462s 00:29:46.125 user 0m8.863s 00:29:46.125 sys 0m0.549s 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:46.125 ************************************ 00:29:46.125 END TEST bdev_bounds 00:29:46.125 ************************************ 00:29:46.125 17:26:23 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:46.125 17:26:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:46.125 17:26:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.125 17:26:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:46.125 ************************************ 00:29:46.125 START TEST bdev_nbd 00:29:46.125 ************************************ 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[
Linux == Linux ]] 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:29:46.125 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63072 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63072 /var/tmp/spdk-nbd.sock 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63072 ']' 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:46.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.126 17:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:46.386 [2024-11-26 17:26:23.651761] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
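Stripped of the xtrace noise, everything below is one workflow: the bdev_svc app just started owns the bdevs and listens on /var/tmp/spdk-nbd.sock, and the test maps each bdev to a kernel NBD node over that socket, exercises it, and unmaps it. A minimal by-hand sketch of the same flow, using only RPC calls that appear in this log (paths as in this job's workspace; jq assumed available, as elsewhere in the run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # export a bdev as a kernel block device; if the /dev/nbd* argument is
    # omitted, a free device is picked and its name printed
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0
    # list the current bdev -> nbd mapping as JSON
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'
    # detach the kernel device again
    $rpc nbd_stop_disk /dev/nbd0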
00:29:46.386 [2024-11-26 17:26:23.652156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.646 [2024-11-26 17:26:23.833842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.646 [2024-11-26 17:26:23.983328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:47.584 17:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:47.843 1+0 records in 00:29:47.843 1+0 records out 00:29:47.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768484 s, 5.3 MB/s 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:47.843 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.103 1+0 records in 00:29:48.103 1+0 records out 00:29:48.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722909 s, 5.7 MB/s 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.103 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:29:48.361 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.362 1+0 records in 00:29:48.362 1+0 records out 00:29:48.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881184 s, 4.6 MB/s 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.362 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.621 1+0 records in 00:29:48.621 1+0 records out 00:29:48.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646507 s, 6.3 MB/s 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.621 17:26:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:48.879 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.880 1+0 records in 00:29:48.880 1+0 records out 00:29:48.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504938 s, 8.1 MB/s 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.880 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.138 1+0 records in 00:29:49.138 1+0 records out 00:29:49.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882492 s, 4.6 MB/s 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:49.138 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:49.396 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.397 1+0 records in 00:29:49.397 1+0 records out 00:29:49.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593842 s, 6.9 MB/s 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:49.397 17:26:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd0", 00:29:49.656 "bdev_name": "Nvme0n1" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd1", 00:29:49.656 "bdev_name": "Nvme1n1p1" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd2", 00:29:49.656 "bdev_name": "Nvme1n1p2" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd3", 00:29:49.656 "bdev_name": "Nvme2n1" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd4", 00:29:49.656 "bdev_name": "Nvme2n2" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd5", 00:29:49.656 "bdev_name": "Nvme2n3" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd6", 00:29:49.656 "bdev_name": "Nvme3n1" 00:29:49.656 } 00:29:49.656 ]' 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd0", 00:29:49.656 "bdev_name": "Nvme0n1" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd1", 00:29:49.656 "bdev_name": "Nvme1n1p1" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd2", 00:29:49.656 "bdev_name": "Nvme1n1p2" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd3", 00:29:49.656 "bdev_name": "Nvme2n1" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd4", 00:29:49.656 "bdev_name": "Nvme2n2" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd5", 00:29:49.656 "bdev_name": "Nvme2n3" 00:29:49.656 }, 00:29:49.656 { 00:29:49.656 "nbd_device": "/dev/nbd6", 00:29:49.656 "bdev_name": "Nvme3n1" 00:29:49.656 } 00:29:49.656 ]' 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.656 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.915 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.174 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.434 17:26:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.434 17:26:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.693 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.953 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.212 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.471 17:26:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:52.037 
17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.037 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:52.295 /dev/nbd0 00:29:52.295 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:52.295 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:52.295 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.296 1+0 records in 00:29:52.296 1+0 records out 00:29:52.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559993 s, 7.3 MB/s 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.296 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:29:52.554 /dev/nbd1 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:52.554 17:26:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.554 1+0 records in 00:29:52.554 1+0 records out 00:29:52.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719919 s, 5.7 MB/s 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.554 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.555 17:26:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:29:52.814 /dev/nbd10 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.814 1+0 records in 00:29:52.814 1+0 records out 00:29:52.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665676 s, 6.2 MB/s 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.814 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:29:53.073 /dev/nbd11 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.073 1+0 records in 00:29:53.073 1+0 records out 00:29:53.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816749 s, 5.0 MB/s 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:53.073 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:29:53.333 /dev/nbd12 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
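Every waitfornbd call traced in this section performs the same readiness check before a device is trusted: poll /proc/partitions until the kernel publishes the node, then read a single 4096-byte block through O_DIRECT and verify the copied size. Condensed into a standalone bash function (the sleep between polls is an assumption; the trace only shows attempts that succeed on the first pass):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        # wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, not visible in this trace
        done
        # a single O_DIRECT read proves the device actually serves data
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s $tmp)
        rm -f $tmp
        [ "$size" != 0 ]
    }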
00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.333 1+0 records in 00:29:53.333 1+0 records out 00:29:53.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806924 s, 5.1 MB/s 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:53.333 17:26:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:29:53.903 /dev/nbd13 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.903 1+0 records in 00:29:53.903 1+0 records out 00:29:53.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082071 s, 5.0 MB/s 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:53.903 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:29:54.163 /dev/nbd14 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:54.163 1+0 records in 00:29:54.163 1+0 records out 00:29:54.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733247 s, 5.6 MB/s 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:54.163 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:54.422 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd0", 00:29:54.422 "bdev_name": "Nvme0n1" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd1", 00:29:54.422 "bdev_name": "Nvme1n1p1" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd10", 00:29:54.422 "bdev_name": "Nvme1n1p2" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd11", 00:29:54.422 "bdev_name": "Nvme2n1" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd12", 00:29:54.422 "bdev_name": "Nvme2n2" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd13", 00:29:54.422 "bdev_name": "Nvme2n3" 
00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd14", 00:29:54.422 "bdev_name": "Nvme3n1" 00:29:54.422 } 00:29:54.422 ]' 00:29:54.422 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd0", 00:29:54.422 "bdev_name": "Nvme0n1" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd1", 00:29:54.422 "bdev_name": "Nvme1n1p1" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd10", 00:29:54.422 "bdev_name": "Nvme1n1p2" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd11", 00:29:54.422 "bdev_name": "Nvme2n1" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd12", 00:29:54.422 "bdev_name": "Nvme2n2" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd13", 00:29:54.422 "bdev_name": "Nvme2n3" 00:29:54.422 }, 00:29:54.422 { 00:29:54.422 "nbd_device": "/dev/nbd14", 00:29:54.422 "bdev_name": "Nvme3n1" 00:29:54.422 } 00:29:54.422 ]' 00:29:54.422 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:54.422 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:54.422 /dev/nbd1 00:29:54.422 /dev/nbd10 00:29:54.422 /dev/nbd11 00:29:54.422 /dev/nbd12 00:29:54.422 /dev/nbd13 00:29:54.422 /dev/nbd14' 00:29:54.422 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:54.422 /dev/nbd1 00:29:54.422 /dev/nbd10 00:29:54.422 /dev/nbd11 00:29:54.422 /dev/nbd12 00:29:54.422 /dev/nbd13 00:29:54.422 /dev/nbd14' 00:29:54.422 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:54.423 256+0 records in 00:29:54.423 256+0 records out 00:29:54.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643857 s, 163 MB/s 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:54.423 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:54.687 256+0 records in 00:29:54.687 256+0 records out 00:29:54.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.1015 s, 10.3 MB/s 00:29:54.687 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:54.687 17:26:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:54.687 256+0 records in 00:29:54.687 256+0 records out 00:29:54.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103909 s, 10.1 MB/s 00:29:54.687 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:54.687 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:54.687 256+0 records in 00:29:54.687 256+0 records out 00:29:54.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105201 s, 10.0 MB/s 00:29:54.687 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:54.687 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:54.953 256+0 records in 00:29:54.953 256+0 records out 00:29:54.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0965109 s, 10.9 MB/s 00:29:54.953 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:54.953 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:29:54.953 256+0 records in 00:29:54.953 256+0 records out 00:29:54.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0973493 s, 10.8 MB/s 00:29:54.953 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:54.953 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:29:55.213 256+0 records in 00:29:55.213 256+0 records out 00:29:55.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0947044 s, 11.1 MB/s 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:29:55.213 256+0 records in 00:29:55.213 256+0 records out 00:29:55.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0980401 s, 10.7 MB/s 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.213 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.472 17:26:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.732 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:55.990 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.991 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.249 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.508 17:26:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.766 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:57.026 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:57.284 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:57.541 malloc_lvol_verify 00:29:57.541 17:26:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:57.798 8ed1ffcf-b6d3-4a64-a72e-1a35df4e4353 00:29:57.798 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:58.055 aa630c66-7166-4f90-b5da-41705f0a3aed 00:29:58.055 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:58.313 /dev/nbd0 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:58.313 mke2fs 1.47.0 (5-Feb-2023) 00:29:58.313 Discarding device blocks: 0/4096 done 00:29:58.313 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:58.313 00:29:58.313 Allocating group tables: 0/1 done 00:29:58.313 Writing inode tables: 0/1 done 00:29:58.313 Creating journal (1024 blocks): done 00:29:58.313 Writing superblocks and filesystem accounting information: 0/1 done 00:29:58.313 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:29:58.313 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63072 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63072 ']' 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63072 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63072 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:58.573 killing process with pid 63072 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63072' 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63072 00:29:58.573 17:26:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63072 00:29:59.951 17:26:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:59.951 00:29:59.951 real 0m13.855s 00:29:59.951 user 0m18.761s 00:29:59.951 sys 0m5.210s 00:29:59.951 17:26:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.951 17:26:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:59.951 ************************************ 00:29:59.951 END TEST bdev_nbd 00:29:59.951 ************************************ 00:30:00.209 17:26:37 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:30:00.209 17:26:37 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:30:00.209 17:26:37 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:30:00.209 17:26:37 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:00.209 skipping fio tests on NVMe due to multi-ns failures. 
00:30:00.209 17:26:37 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:00.209 17:26:37 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:00.209 17:26:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:00.209 17:26:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.209 17:26:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:00.209 ************************************ 00:30:00.209 START TEST bdev_verify 00:30:00.209 ************************************ 00:30:00.209 17:26:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:00.209 [2024-11-26 17:26:37.526739] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:00.209 [2024-11-26 17:26:37.526899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:30:00.467 [2024-11-26 17:26:37.715751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:00.467 [2024-11-26 17:26:37.881560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.467 [2024-11-26 17:26:37.881572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.404 Running I/O for 5 seconds... 
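Before the bdevperf results arrive, the core check of the bdev_nbd stage that just ended is worth capturing: the xtrace above stages 1 MiB of random data with dd, writes it to each of the seven exported /dev/nbdX nodes with direct I/O, and then compares every device back against the staging file with cmp. A condensed sketch of that round trip, reconstructed from the trace (the scratch path is shortened here and error handling is simplified; the real helper is nbd_dd_data_verify in test/bdev/nbd_common.sh):

    #!/usr/bin/env bash
    # Write-then-compare round trip over exported NBD devices, as traced above.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp_file=/tmp/nbdrandtest

    # Stage 1 MiB (256 x 4 KiB) of random data once.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # Push it to every device; oflag=direct bypasses the page cache so the
    # writes actually reach the SPDK bdev behind each NBD node.
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Read each device back and compare the first 1 MiB byte-for-byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || { echo "mismatch on $dev"; exit 1; }
    done
    rm "$tmp_file"

Without oflag=direct the comparison could be satisfied from the page cache and would prove nothing about the bdev underneath.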
00:30:03.732 16128.00 IOPS, 63.00 MiB/s [2024-11-26T17:26:42.111Z] 17344.00 IOPS, 67.75 MiB/s [2024-11-26T17:26:43.046Z] 17322.67 IOPS, 67.67 MiB/s [2024-11-26T17:26:43.980Z] 16624.00 IOPS, 64.94 MiB/s [2024-11-26T17:26:43.980Z] 16627.20 IOPS, 64.95 MiB/s
00:30:06.534 Latency(us)
00:30:06.534 [2024-11-26T17:26:43.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:06.534 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.534 Verification LBA range: start 0x0 length 0xbd0bd
00:30:06.534 Nvme0n1 : 5.09 1256.66 4.91 0.00 0.00 101582.99 21292.05 119968.08
00:30:06.534 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.534 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:30:06.535 Nvme0n1 : 5.08 1083.96 4.23 0.00 0.00 117591.01 25069.67 122715.44
00:30:06.535 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x0 length 0x4ff80
00:30:06.535 Nvme1n1p1 : 5.09 1256.15 4.91 0.00 0.00 101388.20 20376.26 118136.51
00:30:06.535 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x4ff80 length 0x4ff80
00:30:06.535 Nvme1n1p1 : 5.08 1083.48 4.23 0.00 0.00 117352.36 28274.92 118136.51
00:30:06.535 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x0 length 0x4ff7f
00:30:06.535 Nvme1n1p2 : 5.10 1255.12 4.90 0.00 0.00 101183.21 22207.83 122715.44
00:30:06.535 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:30:06.535 Nvme1n1p2 : 5.08 1082.98 4.23 0.00 0.00 117093.60 30449.91 112183.90
00:30:06.535 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x0 length 0x80000
00:30:06.535 Nvme2n1 : 5.10 1254.26 4.90 0.00 0.00 100991.90 24726.25 123631.23
00:30:06.535 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x80000 length 0x80000
00:30:06.535 Nvme2n1 : 5.08 1082.60 4.23 0.00 0.00 116854.23 28732.81 108062.85
00:30:06.535 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x0 length 0x80000
00:30:06.535 Nvme2n2 : 5.10 1253.72 4.90 0.00 0.00 100761.22 25298.61 120883.87
00:30:06.535 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x80000 length 0x80000
00:30:06.535 Nvme2n2 : 5.10 1091.93 4.27 0.00 0.00 115780.27 4321.37 111726.00
00:30:06.535 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x0 length 0x80000
00:30:06.535 Nvme2n3 : 5.11 1253.24 4.90 0.00 0.00 100557.16 22665.73 119968.08
00:30:06.535 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x80000 length 0x80000
00:30:06.535 Nvme2n3 : 5.10 1091.09 4.26 0.00 0.00 115595.19 7498.01 116762.83
00:30:06.535 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x0 length 0x20000
00:30:06.535 Nvme3n1 : 5.11 1252.57 4.89 0.00 0.00 100357.32 15453.90 122715.44
00:30:06.535 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:06.535 Verification LBA range: start 0x20000 length 0x20000
00:30:06.535 Nvme3n1 : 5.11 1090.45 4.26 0.00 0.00 115389.35 9444.05 122715.44
00:30:06.535 [2024-11-26T17:26:43.981Z] ===================================================================================================================
00:30:06.535 [2024-11-26T17:26:43.981Z] Total : 16388.21 64.02 0.00 0.00 108180.37 4321.37 123631.23
00:30:09.065
00:30:09.065 real 0m8.756s
00:30:09.065 user 0m16.092s
00:30:09.065 sys 0m0.423s
00:30:09.065 17:26:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:09.065 17:26:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:30:09.065 ************************************
00:30:09.065 END TEST bdev_verify
00:30:09.065 ************************************
00:30:09.065 17:26:46 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:30:09.065 17:26:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:30:09.065 17:26:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:09.065 17:26:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:09.065 ************************************
00:30:09.065 START TEST bdev_verify_big_io
00:30:09.065 ************************************
00:30:09.065 17:26:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:30:09.065 [2024-11-26 17:26:46.349406] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:30:09.065 [2024-11-26 17:26:46.349572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63627 ]
00:30:09.324 [2024-11-26 17:26:46.536930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:09.324 [2024-11-26 17:26:46.680638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:09.324 [2024-11-26 17:26:46.680731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:10.261 Running I/O for 5 seconds...
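bdev_verify_big_io, now running, is the same bdevperf invocation as bdev_verify with one change visible in the xtrace: the I/O size grows from 4096 to 65536 bytes. An annotated copy of the command (the flag glosses are inferred from the fields bdevperf echoes into the job headers above; -C is simply passed through by the harness at blockdev.sh@815):

    # bdevperf "verify" workload at 64 KiB I/O size, as launched above.
    # -q 128     queue depth per job ("depth: 128" in the job headers)
    # -o 65536   I/O size in bytes (the previous stage used 4096)
    # -w verify  write data, read it back, and check it
    # -t 5       run time in seconds
    # -m 0x3     core mask: cores 0 and 1, matching the two reactor notices
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''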
00:30:15.494 1382.00 IOPS, 86.38 MiB/s [2024-11-26T17:26:53.876Z] 2434.50 IOPS, 152.16 MiB/s [2024-11-26T17:26:53.877Z] 3157.33 IOPS, 197.33 MiB/s
00:30:16.431 Latency(us)
00:30:16.431 [2024-11-26T17:26:53.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:16.431 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0xbd0b
00:30:16.431 Nvme0n1 : 5.72 134.32 8.39 0.00 0.00 906331.52 19117.05 1289427.95
00:30:16.431 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0xbd0b length 0xbd0b
00:30:16.431 Nvme0n1 : 5.75 94.70 5.92 0.00 0.00 1297643.36 30449.91 1267449.07
00:30:16.431 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0x4ff8
00:30:16.431 Nvme1n1p1 : 5.80 137.49 8.59 0.00 0.00 843285.67 81962.93 967070.97
00:30:16.431 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x4ff8 length 0x4ff8
00:30:16.431 Nvme1n1p1 : 5.71 93.25 5.83 0.00 0.00 1295791.55 108062.85 1274775.36
00:30:16.431 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0x4ff7
00:30:16.431 Nvme1n1p2 : 5.86 140.64 8.79 0.00 0.00 798334.17 51970.91 732629.52
00:30:16.431 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x4ff7 length 0x4ff7
00:30:16.431 Nvme1n1p2 : 5.78 97.10 6.07 0.00 0.00 1245361.22 46018.29 2051362.66
00:30:16.431 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0x8000
00:30:16.431 Nvme2n1 : 5.95 149.64 9.35 0.00 0.00 732864.70 21635.47 945092.08
00:30:16.431 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x8000 length 0x8000
00:30:16.431 Nvme2n1 : 5.78 100.33 6.27 0.00 0.00 1185989.63 47163.03 1714353.08
00:30:16.431 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0x8000
00:30:16.431 Nvme2n2 : 5.95 146.23 9.14 0.00 0.00 739796.01 28732.81 1413974.97
00:30:16.431 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x8000 length 0x8000
00:30:16.431 Nvme2n2 : 5.77 99.87 6.24 0.00 0.00 1170862.96 46247.24 1164880.94
00:30:16.431 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0x8000
00:30:16.431 Nvme2n3 : 6.08 165.88 10.37 0.00 0.00 635579.54 17857.84 1435953.86
00:30:16.431 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x8000 length 0x8000
00:30:16.431 Nvme2n3 : 5.79 105.45 6.59 0.00 0.00 1088016.48 12363.12 1245470.18
00:30:16.431 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x0 length 0x2000
00:30:16.431 Nvme3n1 : 6.15 202.76 12.67 0.00 0.00 508130.86 980.18 1465259.04
00:30:16.431 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:16.431 Verification LBA range: start 0x2000 length 0x2000
00:30:16.431 Nvme3n1 : 5.80 110.33 6.90 0.00 0.00 1011611.97 8127.61 1333385.73
00:30:16.431 [2024-11-26T17:26:53.877Z] ===================================================================================================================
00:30:16.431 [2024-11-26T17:26:53.877Z] Total : 1777.99 111.12 0.00 0.00 896350.24 980.18 2051362.66
00:30:19.718
00:30:19.718 real 0m10.312s
00:30:19.718 user 0m19.243s
00:30:19.718 sys 0m0.378s
00:30:19.718 17:26:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:19.718 17:26:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:30:19.718 ************************************
00:30:19.718 END TEST bdev_verify_big_io
00:30:19.718 ************************************
00:30:19.718 17:26:56 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:19.718 17:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:30:19.718 17:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:19.718 17:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:19.718 ************************************
00:30:19.718 START TEST bdev_write_zeroes
00:30:19.718 ************************************
00:30:19.718 17:26:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:19.718 [2024-11-26 17:26:56.698088] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:30:19.718 [2024-11-26 17:26:56.698243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63758 ]
00:30:19.718 [2024-11-26 17:26:56.886177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:19.718 [2024-11-26 17:26:57.047849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:20.653 Running I/O for 1 seconds...
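Every case in this file is launched through the run_test helper from autotest_common.sh; it is what prints the START/END banners and, via time, the real/user/sys triplet after each test. A minimal reconstruction of that wrapper (the real helper also manages xtrace state and failure bookkeeping, omitted here):

    # Simplified run_test: banner, time the wrapped command, banner again.
    run_test() {
        local test_name=$1
        shift

        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'

        time "$@"            # produces the real/user/sys lines in the log
        local rc=$?

        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return "$rc"
    }

Usage mirrors the traced calls, e.g. run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''.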
00:30:21.586 49280.00 IOPS, 192.50 MiB/s
00:30:21.586
00:30:21.586 Latency(us)
00:30:21.586 [2024-11-26T17:26:59.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:21.586 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme0n1 : 1.02 7061.33 27.58 0.00 0.00 18083.84 14194.70 29305.18
00:30:21.586 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme1n1p1 : 1.03 7052.55 27.55 0.00 0.00 18073.21 13908.51 29305.18
00:30:21.586 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme1n1p2 : 1.03 7044.25 27.52 0.00 0.00 18031.14 13851.28 28847.29
00:30:21.586 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme2n1 : 1.03 7036.76 27.49 0.00 0.00 17973.70 14194.70 28389.39
00:30:21.586 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme2n2 : 1.03 7028.70 27.46 0.00 0.00 17942.83 12477.60 28503.87
00:30:21.586 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme2n3 : 1.03 7020.91 27.43 0.00 0.00 17926.93 11619.05 28503.87
00:30:21.586 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:21.586 Nvme3n1 : 1.03 7013.39 27.40 0.00 0.00 17894.37 10417.08 28389.39
00:30:21.586 [2024-11-26T17:26:59.032Z] ===================================================================================================================
00:30:21.586 [2024-11-26T17:26:59.032Z] Total : 49257.88 192.41 0.00 0.00 17989.43 10417.08 29305.18
00:30:23.493
00:30:23.493 real 0m3.851s
00:30:23.493 user 0m3.384s
00:30:23.493 sys 0m0.344s
00:30:23.493 17:27:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:23.493 17:27:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:30:23.493 ************************************
00:30:23.493 END TEST bdev_write_zeroes
00:30:23.493 ************************************
00:30:23.493 17:27:00 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:23.493 17:27:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:30:23.493 17:27:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:23.493 17:27:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:23.493 ************************************
00:30:23.493 START TEST bdev_json_nonenclosed
00:30:23.493 ************************************
00:30:23.493 17:27:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:23.493 [2024-11-26 17:27:00.615397] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
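The write_zeroes table above is easy to cross-check: bdevperf's MiB/s column is just IOPS multiplied by the I/O size. For the 49280.00 IOPS sample at the 4096-byte I/O size from the command line:

    # IOPS x bytes-per-I/O, converted to MiB/s (2^20 bytes per MiB).
    awk 'BEGIN { printf "%.2f MiB/s\n", 49280 * 4096 / (1024 * 1024) }'
    # -> 192.50 MiB/s, matching the "49280.00 IOPS, 192.50 MiB/s" sample

The same identity holds for the earlier tables, for example 16128 IOPS at 4 KiB works out to exactly 63.00 MiB/s.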
00:30:23.493 [2024-11-26 17:27:00.615551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63821 ] 00:30:23.493 [2024-11-26 17:27:00.800954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.753 [2024-11-26 17:27:00.956674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.753 [2024-11-26 17:27:00.956824] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:23.753 [2024-11-26 17:27:00.956848] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:23.753 [2024-11-26 17:27:00.956860] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:24.013 00:30:24.013 real 0m0.771s 00:30:24.013 user 0m0.510s 00:30:24.013 sys 0m0.155s 00:30:24.013 17:27:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.013 17:27:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:24.013 ************************************ 00:30:24.013 END TEST bdev_json_nonenclosed 00:30:24.013 ************************************ 00:30:24.013 17:27:01 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:24.013 17:27:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:24.013 17:27:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.013 17:27:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:24.013 ************************************ 00:30:24.013 START TEST bdev_json_nonarray 00:30:24.013 ************************************ 00:30:24.013 17:27:01 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:24.013 [2024-11-26 17:27:01.448175] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:24.013 [2024-11-26 17:27:01.448362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63848 ] 00:30:24.273 [2024-11-26 17:27:01.633044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.532 [2024-11-26 17:27:01.777739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.532 [2024-11-26 17:27:01.777873] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
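The error just printed ('subsystems' should be an array) and the "not enclosed in {}" failure above come from the same negative-test pattern: bdevperf is pointed at a deliberately malformed JSON config, and the test passes only if the app refuses to start. Neither fixture's content is echoed into the log, so the file below is a hypothetical illustration of the nonenclosed defect, not the repository's actual nonenclosed.json; the nonarray case would instead supply a 'subsystems' value that is not an array.

    # Hypothetical fixture: a valid JSON fragment, but not enclosed in {}.
    printf '"subsystems": []\n' > /tmp/nonenclosed.json

    # The run must fail; a zero exit would mean the bad config was accepted.
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo 'FAIL: malformed JSON config was accepted'
        exit 1
    fi
    # Expected instead: json_config_prepare_ctx logs "Invalid JSON
    # configuration: not enclosed in {}." and the app exits non-zero
    # ("spdk_app_stop'd on non-zero" in the log).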
00:30:24.532 [2024-11-26 17:27:01.777894] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:24.532 [2024-11-26 17:27:01.777905] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:24.790 00:30:24.791 real 0m0.721s 00:30:24.791 user 0m0.469s 00:30:24.791 sys 0m0.147s 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:24.791 ************************************ 00:30:24.791 END TEST bdev_json_nonarray 00:30:24.791 ************************************ 00:30:24.791 17:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:30:24.791 17:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:30:24.791 17:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:24.791 17:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.791 17:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.791 17:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:24.791 ************************************ 00:30:24.791 START TEST bdev_gpt_uuid 00:30:24.791 ************************************ 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63879 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63879 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63879 ']' 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.791 17:27:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:25.049 [2024-11-26 17:27:02.245174] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
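Unlike the bdevperf-driven cases, bdev_gpt_uuid runs against a standalone spdk_tgt (pid 63879 in this run): the harness starts the target, waits for the RPC socket, loads the bdev config, and then checks that each GPT partition bdev reports the GUID that GPT assigned it. A compressed sketch of both halves, reconstructed from the trace below (the readiness probe is simplified, with rpc_get_methods standing in for waitforlisten's checks; the GUID is the first partition's, taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the target and wait until its RPC socket answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    until "$rpc" rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

    # Load the bdev config and let examine (GPT parsing) finish.
    "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc" bdev_wait_for_examine

    # Fetch the first partition bdev by its GUID and check the fields agree.
    expected=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev_json=$("$rpc" bdev_get_bdevs -b "$expected")
    [[ $(jq -r 'length' <<< "$bdev_json") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$expected" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$expected" ]]

The trace repeats the same three probes for the second partition (abf1734f-66e5-4c0f-aa29-4021d4d307df) before killing the target.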
00:30:25.049 [2024-11-26 17:27:02.245355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63879 ] 00:30:25.049 [2024-11-26 17:27:02.431313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.308 [2024-11-26 17:27:02.573696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.244 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.244 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:30:26.244 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:26.244 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.244 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.811 Some configs were skipped because the RPC state that can call them passed over. 00:30:26.811 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.811 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:30:26.811 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.811 17:27:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:30:26.811 { 00:30:26.811 "name": "Nvme1n1p1", 00:30:26.811 "aliases": [ 00:30:26.811 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:26.811 ], 00:30:26.811 "product_name": "GPT Disk", 00:30:26.811 "block_size": 4096, 00:30:26.811 "num_blocks": 655104, 00:30:26.811 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:26.811 "assigned_rate_limits": { 00:30:26.811 "rw_ios_per_sec": 0, 00:30:26.811 "rw_mbytes_per_sec": 0, 00:30:26.811 "r_mbytes_per_sec": 0, 00:30:26.811 "w_mbytes_per_sec": 0 00:30:26.811 }, 00:30:26.811 "claimed": false, 00:30:26.811 "zoned": false, 00:30:26.811 "supported_io_types": { 00:30:26.811 "read": true, 00:30:26.811 "write": true, 00:30:26.811 "unmap": true, 00:30:26.811 "flush": true, 00:30:26.811 "reset": true, 00:30:26.811 "nvme_admin": false, 00:30:26.811 "nvme_io": false, 00:30:26.811 "nvme_io_md": false, 00:30:26.811 "write_zeroes": true, 00:30:26.811 "zcopy": false, 00:30:26.811 "get_zone_info": false, 00:30:26.811 "zone_management": false, 00:30:26.811 "zone_append": false, 00:30:26.811 "compare": true, 00:30:26.811 "compare_and_write": false, 00:30:26.811 "abort": true, 00:30:26.811 "seek_hole": false, 00:30:26.811 "seek_data": false, 00:30:26.811 "copy": true, 00:30:26.811 "nvme_iov_md": false 00:30:26.811 }, 00:30:26.811 "driver_specific": { 
00:30:26.811 "gpt": { 00:30:26.811 "base_bdev": "Nvme1n1", 00:30:26.811 "offset_blocks": 256, 00:30:26.811 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:26.811 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:26.811 "partition_name": "SPDK_TEST_first" 00:30:26.811 } 00:30:26.811 } 00:30:26.811 } 00:30:26.811 ]' 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:30:26.811 { 00:30:26.811 "name": "Nvme1n1p2", 00:30:26.811 "aliases": [ 00:30:26.811 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:26.811 ], 00:30:26.811 "product_name": "GPT Disk", 00:30:26.811 "block_size": 4096, 00:30:26.811 "num_blocks": 655103, 00:30:26.811 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:26.811 "assigned_rate_limits": { 00:30:26.811 "rw_ios_per_sec": 0, 00:30:26.811 "rw_mbytes_per_sec": 0, 00:30:26.811 "r_mbytes_per_sec": 0, 00:30:26.811 "w_mbytes_per_sec": 0 00:30:26.811 }, 00:30:26.811 "claimed": false, 00:30:26.811 "zoned": false, 00:30:26.811 "supported_io_types": { 00:30:26.811 "read": true, 00:30:26.811 "write": true, 00:30:26.811 "unmap": true, 00:30:26.811 "flush": true, 00:30:26.811 "reset": true, 00:30:26.811 "nvme_admin": false, 00:30:26.811 "nvme_io": false, 00:30:26.811 "nvme_io_md": false, 00:30:26.811 "write_zeroes": true, 00:30:26.811 "zcopy": false, 00:30:26.811 "get_zone_info": false, 00:30:26.811 "zone_management": false, 00:30:26.811 "zone_append": false, 00:30:26.811 "compare": true, 00:30:26.811 "compare_and_write": false, 00:30:26.811 "abort": true, 00:30:26.811 "seek_hole": false, 00:30:26.811 "seek_data": false, 00:30:26.811 "copy": true, 00:30:26.811 "nvme_iov_md": false 00:30:26.811 }, 00:30:26.811 "driver_specific": { 00:30:26.811 "gpt": { 00:30:26.811 "base_bdev": "Nvme1n1", 00:30:26.811 "offset_blocks": 655360, 00:30:26.811 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:26.811 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:26.811 "partition_name": "SPDK_TEST_second" 00:30:26.811 } 00:30:26.811 } 00:30:26.811 } 00:30:26.811 ]' 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:30:26.811 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63879 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63879 ']' 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63879 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63879 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63879' 00:30:27.086 killing process with pid 63879 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63879 00:30:27.086 17:27:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63879 00:30:30.373 00:30:30.374 real 0m5.295s 00:30:30.374 user 0m5.271s 00:30:30.374 sys 0m0.723s 00:30:30.374 17:27:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.374 17:27:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:30.374 ************************************ 00:30:30.374 END TEST bdev_gpt_uuid 00:30:30.374 ************************************ 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:30:30.374 17:27:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:30.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:30.892 Waiting for block devices as requested 00:30:31.151 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:31.151 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:30:31.151 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:31.411 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:36.687 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:36.687 17:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:30:36.687 17:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:30:36.687 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:36.687 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:36.687 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:36.687 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:36.687 17:27:14 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:30:36.687 00:30:36.687 real 1m11.856s 00:30:36.687 user 1m31.299s 00:30:36.687 sys 0m12.576s 00:30:36.687 17:27:14 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.687 17:27:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:36.687 ************************************ 00:30:36.687 END TEST blockdev_nvme_gpt 00:30:36.687 ************************************ 00:30:36.687 17:27:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:36.687 17:27:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:36.687 17:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.687 17:27:14 -- common/autotest_common.sh@10 -- # set +x 00:30:36.687 ************************************ 00:30:36.687 START TEST nvme 00:30:36.687 ************************************ 00:30:36.687 17:27:14 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:36.946 * Looking for test storage... 00:30:36.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:36.946 17:27:14 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.946 17:27:14 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.947 17:27:14 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.947 17:27:14 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.947 17:27:14 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.947 17:27:14 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.947 17:27:14 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.947 17:27:14 nvme -- scripts/common.sh@344 -- # case "$op" in 00:30:36.947 17:27:14 nvme -- scripts/common.sh@345 -- # : 1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.947 17:27:14 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.947 17:27:14 nvme -- scripts/common.sh@365 -- # decimal 1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@353 -- # local d=1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.947 17:27:14 nvme -- scripts/common.sh@355 -- # echo 1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.947 17:27:14 nvme -- scripts/common.sh@366 -- # decimal 2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@353 -- # local d=2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.947 17:27:14 nvme -- scripts/common.sh@355 -- # echo 2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.947 17:27:14 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.947 17:27:14 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.947 17:27:14 nvme -- scripts/common.sh@368 -- # return 0 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.947 --rc genhtml_branch_coverage=1 00:30:36.947 --rc genhtml_function_coverage=1 00:30:36.947 --rc genhtml_legend=1 00:30:36.947 --rc geninfo_all_blocks=1 00:30:36.947 --rc geninfo_unexecuted_blocks=1 00:30:36.947 00:30:36.947 ' 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.947 --rc genhtml_branch_coverage=1 00:30:36.947 --rc genhtml_function_coverage=1 00:30:36.947 --rc genhtml_legend=1 00:30:36.947 --rc geninfo_all_blocks=1 00:30:36.947 --rc geninfo_unexecuted_blocks=1 00:30:36.947 00:30:36.947 ' 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.947 --rc genhtml_branch_coverage=1 00:30:36.947 --rc genhtml_function_coverage=1 00:30:36.947 --rc genhtml_legend=1 00:30:36.947 --rc geninfo_all_blocks=1 00:30:36.947 --rc geninfo_unexecuted_blocks=1 00:30:36.947 00:30:36.947 ' 00:30:36.947 17:27:14 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.947 --rc genhtml_branch_coverage=1 00:30:36.947 --rc genhtml_function_coverage=1 00:30:36.947 --rc genhtml_legend=1 00:30:36.947 --rc geninfo_all_blocks=1 00:30:36.947 --rc geninfo_unexecuted_blocks=1 00:30:36.947 00:30:36.947 ' 00:30:36.947 17:27:14 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:37.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:38.456 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:38.456 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:38.456 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:38.456 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:38.456 17:27:15 nvme -- nvme/nvme.sh@79 -- # uname 00:30:38.456 17:27:15 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:38.456 17:27:15 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:38.456 17:27:15 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:38.456 17:27:15 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1075 -- # stubpid=64549 00:30:38.456 Waiting for stub to ready for secondary processes... 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64549 ]] 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:30:38.456 17:27:15 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:38.715 [2024-11-26 17:27:15.917838] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:38.715 [2024-11-26 17:27:15.918008] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:30:39.653 17:27:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:39.653 17:27:16 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64549 ]] 00:30:39.653 17:27:16 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:30:40.591 [2024-11-26 17:27:17.729936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:40.591 17:27:17 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:40.591 17:27:17 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64549 ]] 00:30:40.591 17:27:17 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:30:40.591 [2024-11-26 17:27:17.871267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.591 [2024-11-26 17:27:17.871370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.591 [2024-11-26 17:27:17.871392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.591 [2024-11-26 17:27:17.891559] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:30:40.591 [2024-11-26 17:27:17.891737] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:40.591 [2024-11-26 17:27:17.908028] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:40.591 [2024-11-26 17:27:17.908319] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:40.591 [2024-11-26 17:27:17.911889] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:40.591 [2024-11-26 17:27:17.912234] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:30:40.591 [2024-11-26 17:27:17.912427] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:30:40.591 [2024-11-26 17:27:17.916703] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:40.591 [2024-11-26 17:27:17.917054] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:30:40.591 [2024-11-26 17:27:17.917226] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:30:40.591 [2024-11-26 17:27:17.920657] 
nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:40.591 [2024-11-26 17:27:17.921025] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:30:40.591 [2024-11-26 17:27:17.921193] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:30:40.591 [2024-11-26 17:27:17.921322] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:30:40.591 [2024-11-26 17:27:17.921431] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:30:41.527 17:27:18 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:41.527 done. 00:30:41.527 17:27:18 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:30:41.527 17:27:18 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:41.527 17:27:18 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:30:41.527 17:27:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.527 17:27:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:41.527 ************************************ 00:30:41.527 START TEST nvme_reset 00:30:41.527 ************************************ 00:30:41.527 17:27:18 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:41.786 Initializing NVMe Controllers 00:30:41.786 Skipping QEMU NVMe SSD at 0000:00:10.0 00:30:41.786 Skipping QEMU NVMe SSD at 0000:00:11.0 00:30:41.786 Skipping QEMU NVMe SSD at 0000:00:13.0 00:30:41.786 Skipping QEMU NVMe SSD at 0000:00:12.0 00:30:41.786 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:41.786 00:30:41.786 real 0m0.325s 00:30:41.786 user 0m0.102s 00:30:41.786 sys 0m0.170s 00:30:41.786 17:27:19 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.786 17:27:19 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:30:41.786 ************************************ 00:30:41.786 END TEST nvme_reset 00:30:41.786 ************************************ 00:30:42.045 17:27:19 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:42.045 17:27:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:42.045 17:27:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.045 17:27:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:42.045 ************************************ 00:30:42.045 START TEST nvme_identify 00:30:42.045 ************************************ 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:30:42.045 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:30:42.045 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:42.045 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:42.045 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:30:42.045 17:27:19 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:42.045 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:42.308 [2024-11-26 17:27:19.695375] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64589 terminated unexpected 00:30:42.308 ===================================================== 00:30:42.308 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:42.308 ===================================================== 00:30:42.308 Controller Capabilities/Features 00:30:42.308 ================================ 00:30:42.308 Vendor ID: 1b36 00:30:42.308 Subsystem Vendor ID: 1af4 00:30:42.308 Serial Number: 12340 00:30:42.308 Model Number: QEMU NVMe Ctrl 00:30:42.308 Firmware Version: 8.0.0 00:30:42.308 Recommended Arb Burst: 6 00:30:42.308 IEEE OUI Identifier: 00 54 52 00:30:42.308 Multi-path I/O 00:30:42.308 May have multiple subsystem ports: No 00:30:42.308 May have multiple controllers: No 00:30:42.308 Associated with SR-IOV VF: No 00:30:42.308 Max Data Transfer Size: 524288 00:30:42.308 Max Number of Namespaces: 256 00:30:42.308 Max Number of I/O Queues: 64 00:30:42.308 NVMe Specification Version (VS): 1.4 00:30:42.308 NVMe Specification Version (Identify): 1.4 00:30:42.308 Maximum Queue Entries: 2048 00:30:42.308 Contiguous Queues Required: Yes 00:30:42.308 Arbitration Mechanisms Supported 00:30:42.308 Weighted Round Robin: Not Supported 00:30:42.308 Vendor Specific: Not Supported 00:30:42.308 Reset Timeout: 7500 ms 00:30:42.308 Doorbell Stride: 4 bytes 00:30:42.308 NVM Subsystem Reset: Not Supported 00:30:42.308 Command Sets Supported 00:30:42.308 NVM Command Set: Supported 00:30:42.308 Boot Partition: Not Supported 00:30:42.308 Memory Page Size Minimum: 4096 bytes 00:30:42.308 Memory Page Size Maximum: 65536 bytes 00:30:42.308 Persistent Memory Region: Not Supported 00:30:42.308 Optional Asynchronous Events Supported 00:30:42.308 Namespace Attribute Notices: Supported 00:30:42.308 Firmware Activation Notices: Not Supported 00:30:42.308 ANA Change Notices: Not Supported 00:30:42.308 PLE Aggregate Log Change Notices: Not Supported 00:30:42.308 LBA Status Info Alert Notices: Not Supported 00:30:42.308 EGE Aggregate Log Change Notices: Not Supported 00:30:42.308 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.308 Zone Descriptor Change Notices: Not Supported 00:30:42.308 Discovery Log Change Notices: Not Supported 00:30:42.308 Controller Attributes 00:30:42.308 128-bit Host Identifier: Not Supported 00:30:42.308 Non-Operational Permissive Mode: Not Supported 00:30:42.308 NVM Sets: Not Supported 00:30:42.308 Read Recovery Levels: Not Supported 00:30:42.308 Endurance Groups: Not Supported 00:30:42.308 Predictable Latency Mode: Not Supported 00:30:42.308 Traffic Based Keep Alive: Not Supported 00:30:42.308 Namespace Granularity: Not Supported 00:30:42.308 SQ Associations: Not Supported 00:30:42.308 UUID List: Not Supported 00:30:42.308 Multi-Domain Subsystem: Not Supported 00:30:42.308 Fixed Capacity Management: Not Supported 00:30:42.308 Variable Capacity Management: Not Supported 00:30:42.308 Delete Endurance Group: Not Supported
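
A minimal sketch of the get_nvme_bdfs helper traced just above (common/autotest_common.sh@1498-@1504 in this run), which builds the device list behind these identify dumps. The script path and the jq filter are taken verbatim from the trace; the surrounding function body is an approximation, not the exact library source:

    # gen_nvme.sh emits an SPDK JSON config; jq pulls each controller's
    # PCI address (traddr) out of it.
    rootdir=/home/vagrant/spdk_repo/spdk
    get_nvme_bdfs() {
        local bdfs=()
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1     # no controllers enumerated
        printf '%s\n' "${bdfs[@]}"             # here: 0000:00:10.0 ... 0000:00:13.0
    }
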
00:30:42.308 Delete NVM Set: Not Supported 00:30:42.308 Extended LBA Formats Supported: Supported 00:30:42.308 Flexible Data Placement Supported: Not Supported 00:30:42.308 00:30:42.308 Controller Memory Buffer Support 00:30:42.308 ================================ 00:30:42.308 Supported: No 00:30:42.308 00:30:42.308 Persistent Memory Region Support 00:30:42.308 ================================ 00:30:42.308 Supported: No 00:30:42.308 00:30:42.308 Admin Command Set Attributes 00:30:42.308 ============================ 00:30:42.308 Security Send/Receive: Not Supported 00:30:42.308 Format NVM: Supported 00:30:42.308 Firmware Activate/Download: Not Supported 00:30:42.308 Namespace Management: Supported 00:30:42.308 Device Self-Test: Not Supported 00:30:42.308 Directives: Supported 00:30:42.308 NVMe-MI: Not Supported 00:30:42.308 Virtualization Management: Not Supported 00:30:42.308 Doorbell Buffer Config: Supported 00:30:42.308 Get LBA Status Capability: Not Supported 00:30:42.308 Command & Feature Lockdown Capability: Not Supported 00:30:42.308 Abort Command Limit: 4 00:30:42.308 Async Event Request Limit: 4 00:30:42.308 Number of Firmware Slots: N/A 00:30:42.308 Firmware Slot 1 Read-Only: N/A 00:30:42.308 Firmware Activation Without Reset: N/A 00:30:42.308 Multiple Update Detection Support: N/A 00:30:42.308 Firmware Update Granularity: No Information Provided 00:30:42.308 Per-Namespace SMART Log: Yes 00:30:42.308 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.308 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:42.308 Command Effects Log Page: Supported 00:30:42.308 Get Log Page Extended Data: Supported 00:30:42.308 Telemetry Log Pages: Not Supported 00:30:42.308 Persistent Event Log Pages: Not Supported 00:30:42.308 Supported Log Pages Log Page: May Support 00:30:42.308 Commands Supported & Effects Log Page: Not Supported 00:30:42.308 Feature Identifiers & Effects Log Page: May Support 00:30:42.308 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.309 Data Area 4 for Telemetry Log: Not Supported 00:30:42.309 Error Log Page Entries Supported: 1 00:30:42.309 Keep Alive: Not Supported 00:30:42.309 00:30:42.309 NVM Command Set Attributes 00:30:42.309 ========================== 00:30:42.309 Submission Queue Entry Size 00:30:42.309 Max: 64 00:30:42.309 Min: 64 00:30:42.309 Completion Queue Entry Size 00:30:42.309 Max: 16 00:30:42.309 Min: 16 00:30:42.309 Number of Namespaces: 256 00:30:42.309 Compare Command: Supported 00:30:42.309 Write Uncorrectable Command: Not Supported 00:30:42.309 Dataset Management Command: Supported 00:30:42.309 Write Zeroes Command: Supported 00:30:42.309 Set Features Save Field: Supported 00:30:42.309 Reservations: Not Supported 00:30:42.309 Timestamp: Supported 00:30:42.309 Copy: Supported 00:30:42.309 Volatile Write Cache: Present 00:30:42.309 Atomic Write Unit (Normal): 1 00:30:42.309 Atomic Write Unit (PFail): 1 00:30:42.309 Atomic Compare & Write Unit: 1 00:30:42.309 Fused Compare & Write: Not Supported 00:30:42.309 Scatter-Gather List 00:30:42.309 SGL Command Set: Supported 00:30:42.309 SGL Keyed: Not Supported 00:30:42.309 SGL Bit Bucket Descriptor: Not Supported 00:30:42.309 SGL Metadata Pointer: Not Supported 00:30:42.309 Oversized SGL: Not Supported 00:30:42.309 SGL Metadata Address: Not Supported 00:30:42.309 SGL Offset: Not Supported 00:30:42.309 Transport SGL Data Block: Not Supported 00:30:42.309 Replay Protected Memory Block: Not Supported 00:30:42.309 00:30:42.309 Firmware Slot Information 00:30:42.309 =========================
00:30:42.309 Active slot: 1 00:30:42.309 Slot 1 Firmware Revision: 1.0 00:30:42.309 00:30:42.309 00:30:42.309 Commands Supported and Effects 00:30:42.309 ============================== 00:30:42.309 Admin Commands 00:30:42.309 -------------- 00:30:42.309 Delete I/O Submission Queue (00h): Supported 00:30:42.309 Create I/O Submission Queue (01h): Supported 00:30:42.309 Get Log Page (02h): Supported 00:30:42.309 Delete I/O Completion Queue (04h): Supported 00:30:42.309 Create I/O Completion Queue (05h): Supported 00:30:42.309 Identify (06h): Supported 00:30:42.309 Abort (08h): Supported 00:30:42.309 Set Features (09h): Supported 00:30:42.309 Get Features (0Ah): Supported 00:30:42.309 Asynchronous Event Request (0Ch): Supported 00:30:42.309 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:42.309 Directive Send (19h): Supported 00:30:42.309 Directive Receive (1Ah): Supported 00:30:42.309 Virtualization Management (1Ch): Supported 00:30:42.309 Doorbell Buffer Config (7Ch): Supported 00:30:42.309 Format NVM (80h): Supported LBA-Change 00:30:42.309 I/O Commands 00:30:42.309 ------------ 00:30:42.309 Flush (00h): Supported LBA-Change 00:30:42.309 Write (01h): Supported LBA-Change 00:30:42.309 Read (02h): Supported 00:30:42.309 Compare (05h): Supported 00:30:42.309 Write Zeroes (08h): Supported LBA-Change 00:30:42.309 Dataset Management (09h): Supported LBA-Change 00:30:42.309 Unknown (0Ch): Supported 00:30:42.309 Unknown (12h): Supported 00:30:42.309 Copy (19h): Supported LBA-Change 00:30:42.309 Unknown (1Dh): Supported LBA-Change 00:30:42.309 00:30:42.309 Error Log 00:30:42.309 ========= 00:30:42.309 00:30:42.309 Arbitration 00:30:42.309 =========== 00:30:42.309 Arbitration Burst: no limit 00:30:42.309 00:30:42.309 Power Management 00:30:42.309 ================ 00:30:42.309 Number of Power States: 1 00:30:42.309 Current Power State: Power State #0 00:30:42.309 Power State #0: 00:30:42.309 Max Power: 25.00 W 00:30:42.309 Non-Operational State: Operational 00:30:42.309 Entry Latency: 16 microseconds 00:30:42.309 Exit Latency: 4 microseconds 00:30:42.309 Relative Read Throughput: 0 00:30:42.309 Relative Read Latency: 0 00:30:42.309 Relative Write Throughput: 0 00:30:42.309 Relative Write Latency: 0 00:30:42.309 Idle Power: Not Reported [2024-11-26 17:27:19.697106] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64589 terminated unexpected 00:30:42.309 Active Power: Not Reported 00:30:42.309 Non-Operational Permissive Mode: Not Supported 00:30:42.309 00:30:42.309 Health Information 00:30:42.309 ================== 00:30:42.309 Critical Warnings: 00:30:42.309 Available Spare Space: OK 00:30:42.309 Temperature: OK 00:30:42.309 Device Reliability: OK 00:30:42.309 Read Only: No 00:30:42.309 Volatile Memory Backup: OK 00:30:42.309 Current Temperature: 323 Kelvin (50 Celsius) 00:30:42.309 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:42.309 Available Spare: 0% 00:30:42.309 Available Spare Threshold: 0% 00:30:42.309 Life Percentage Used: 0% 00:30:42.309 Data Units Read: 685 00:30:42.309 Data Units Written: 613 00:30:42.309 Host Read Commands: 31413 00:30:42.309 Host Write Commands: 31199 00:30:42.309 Controller Busy Time: 0 minutes 00:30:42.309 Power Cycles: 0 00:30:42.309 Power On Hours: 0 hours 00:30:42.309 Unsafe Shutdowns: 0 00:30:42.309 Unrecoverable Media Errors: 0 00:30:42.309 Lifetime Error Log Entries: 0 00:30:42.309 Warning Temperature Time: 0 minutes 00:30:42.309 Critical Temperature Time: 0 minutes 00:30:42.309 00:30:42.309
Number of Queues 00:30:42.309 ================ 00:30:42.309 Number of I/O Submission Queues: 64 00:30:42.309 Number of I/O Completion Queues: 64 00:30:42.309 00:30:42.309 ZNS Specific Controller Data 00:30:42.309 ============================ 00:30:42.309 Zone Append Size Limit: 0 00:30:42.309 00:30:42.309 00:30:42.309 Active Namespaces 00:30:42.309 ================= 00:30:42.309 Namespace ID:1 00:30:42.309 Error Recovery Timeout: Unlimited 00:30:42.309 Command Set Identifier: NVM (00h) 00:30:42.309 Deallocate: Supported 00:30:42.309 Deallocated/Unwritten Error: Supported 00:30:42.309 Deallocated Read Value: All 0x00 00:30:42.309 Deallocate in Write Zeroes: Not Supported 00:30:42.309 Deallocated Guard Field: 0xFFFF 00:30:42.309 Flush: Supported 00:30:42.309 Reservation: Not Supported 00:30:42.309 Metadata Transferred as: Separate Metadata Buffer 00:30:42.309 Namespace Sharing Capabilities: Private 00:30:42.309 Size (in LBAs): 1548666 (5GiB) 00:30:42.309 Capacity (in LBAs): 1548666 (5GiB) 00:30:42.309 Utilization (in LBAs): 1548666 (5GiB) 00:30:42.309 Thin Provisioning: Not Supported 00:30:42.309 Per-NS Atomic Units: No 00:30:42.309 Maximum Single Source Range Length: 128 00:30:42.309 Maximum Copy Length: 128 00:30:42.309 Maximum Source Range Count: 128 00:30:42.309 NGUID/EUI64 Never Reused: No 00:30:42.309 Namespace Write Protected: No 00:30:42.309 Number of LBA Formats: 8 00:30:42.309 Current LBA Format: LBA Format #07 00:30:42.309 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.309 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.309 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.309 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.309 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.309 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.309 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.309 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.309 00:30:42.309 NVM Specific Namespace Data 00:30:42.309 =========================== 00:30:42.309 Logical Block Storage Tag Mask: 0 00:30:42.309 Protection Information Capabilities: 00:30:42.309 16b Guard Protection Information Storage Tag Support: No 00:30:42.309 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.309 Storage Tag Check Read Support: No 00:30:42.309 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.309 ===================================================== 00:30:42.309 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:42.309 ===================================================== 00:30:42.309 Controller Capabilities/Features 00:30:42.309 ================================ 00:30:42.309 Vendor ID: 1b36 00:30:42.309 Subsystem Vendor ID: 1af4 00:30:42.309 Serial Number: 
12341 00:30:42.309 Model Number: QEMU NVMe Ctrl 00:30:42.309 Firmware Version: 8.0.0 00:30:42.309 Recommended Arb Burst: 6 00:30:42.309 IEEE OUI Identifier: 00 54 52 00:30:42.309 Multi-path I/O 00:30:42.309 May have multiple subsystem ports: No 00:30:42.309 May have multiple controllers: No 00:30:42.309 Associated with SR-IOV VF: No 00:30:42.309 Max Data Transfer Size: 524288 00:30:42.309 Max Number of Namespaces: 256 00:30:42.309 Max Number of I/O Queues: 64 00:30:42.310 NVMe Specification Version (VS): 1.4 00:30:42.310 NVMe Specification Version (Identify): 1.4 00:30:42.310 Maximum Queue Entries: 2048 00:30:42.310 Contiguous Queues Required: Yes 00:30:42.310 Arbitration Mechanisms Supported 00:30:42.310 Weighted Round Robin: Not Supported 00:30:42.310 Vendor Specific: Not Supported 00:30:42.310 Reset Timeout: 7500 ms 00:30:42.310 Doorbell Stride: 4 bytes 00:30:42.310 NVM Subsystem Reset: Not Supported 00:30:42.310 Command Sets Supported 00:30:42.310 NVM Command Set: Supported 00:30:42.310 Boot Partition: Not Supported 00:30:42.310 Memory Page Size Minimum: 4096 bytes 00:30:42.310 Memory Page Size Maximum: 65536 bytes 00:30:42.310 Persistent Memory Region: Not Supported 00:30:42.310 Optional Asynchronous Events Supported 00:30:42.310 Namespace Attribute Notices: Supported 00:30:42.310 Firmware Activation Notices: Not Supported 00:30:42.310 ANA Change Notices: Not Supported 00:30:42.310 PLE Aggregate Log Change Notices: Not Supported 00:30:42.310 LBA Status Info Alert Notices: Not Supported 00:30:42.310 EGE Aggregate Log Change Notices: Not Supported 00:30:42.310 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.310 Zone Descriptor Change Notices: Not Supported 00:30:42.310 Discovery Log Change Notices: Not Supported 00:30:42.310 Controller Attributes 00:30:42.310 128-bit Host Identifier: Not Supported 00:30:42.310 Non-Operational Permissive Mode: Not Supported 00:30:42.310 NVM Sets: Not Supported 00:30:42.310 Read Recovery Levels: Not Supported 00:30:42.310 Endurance Groups: Not Supported 00:30:42.310 Predictable Latency Mode: Not Supported 00:30:42.310 Traffic Based Keep Alive: Not Supported 00:30:42.310 Namespace Granularity: Not Supported 00:30:42.310 SQ Associations: Not Supported 00:30:42.310 UUID List: Not Supported 00:30:42.310 Multi-Domain Subsystem: Not Supported 00:30:42.310 Fixed Capacity Management: Not Supported 00:30:42.310 Variable Capacity Management: Not Supported 00:30:42.310 Delete Endurance Group: Not Supported 00:30:42.310 Delete NVM Set: Not Supported 00:30:42.310 Extended LBA Formats Supported: Supported 00:30:42.310 Flexible Data Placement Supported: Not Supported 00:30:42.310 00:30:42.310 Controller Memory Buffer Support 00:30:42.310 ================================ 00:30:42.310 Supported: No 00:30:42.310 00:30:42.310 Persistent Memory Region Support 00:30:42.310 ================================ 00:30:42.310 Supported: No 00:30:42.310 00:30:42.310 Admin Command Set Attributes 00:30:42.310 ============================ 00:30:42.310 Security Send/Receive: Not Supported 00:30:42.310 Format NVM: Supported 00:30:42.310 Firmware Activate/Download: Not Supported 00:30:42.310 Namespace Management: Supported 00:30:42.310 Device Self-Test: Not Supported 00:30:42.310 Directives: Supported 00:30:42.310 NVMe-MI: Not Supported 00:30:42.310 Virtualization Management: Not Supported 00:30:42.310 Doorbell Buffer Config: Supported 00:30:42.310 Get LBA Status Capability: Not Supported 00:30:42.310 Command & Feature Lockdown Capability: Not Supported 00:30:42.310 Abort
Command Limit: 4 00:30:42.310 Async Event Request Limit: 4 00:30:42.310 Number of Firmware Slots: N/A 00:30:42.310 Firmware Slot 1 Read-Only: N/A 00:30:42.310 Firmware Activation Without Reset: N/A 00:30:42.310 Multiple Update Detection Support: N/A 00:30:42.310 Firmware Update Granularity: No Information Provided 00:30:42.310 Per-Namespace SMART Log: Yes 00:30:42.310 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.310 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:42.310 Command Effects Log Page: Supported 00:30:42.310 Get Log Page Extended Data: Supported 00:30:42.310 Telemetry Log Pages: Not Supported 00:30:42.310 Persistent Event Log Pages: Not Supported 00:30:42.310 Supported Log Pages Log Page: May Support 00:30:42.310 Commands Supported & Effects Log Page: Not Supported 00:30:42.310 Feature Identifiers & Effects Log Page: May Support 00:30:42.310 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.310 Data Area 4 for Telemetry Log: Not Supported 00:30:42.310 Error Log Page Entries Supported: 1 00:30:42.310 Keep Alive: Not Supported 00:30:42.310 00:30:42.310 NVM Command Set Attributes 00:30:42.310 ========================== 00:30:42.310 Submission Queue Entry Size 00:30:42.310 Max: 64 00:30:42.310 Min: 64 00:30:42.310 Completion Queue Entry Size 00:30:42.310 Max: 16 00:30:42.310 Min: 16 00:30:42.310 Number of Namespaces: 256 00:30:42.310 Compare Command: Supported 00:30:42.310 Write Uncorrectable Command: Not Supported 00:30:42.310 Dataset Management Command: Supported 00:30:42.310 Write Zeroes Command: Supported 00:30:42.310 Set Features Save Field: Supported 00:30:42.310 Reservations: Not Supported 00:30:42.310 Timestamp: Supported 00:30:42.310 Copy: Supported 00:30:42.310 Volatile Write Cache: Present 00:30:42.310 Atomic Write Unit (Normal): 1 00:30:42.310 Atomic Write Unit (PFail): 1 00:30:42.310 Atomic Compare & Write Unit: 1 00:30:42.310 Fused Compare & Write: Not Supported 00:30:42.310 Scatter-Gather List 00:30:42.310 SGL Command Set: Supported 00:30:42.310 SGL Keyed: Not Supported 00:30:42.310 SGL Bit Bucket Descriptor: Not Supported 00:30:42.310 SGL Metadata Pointer: Not Supported 00:30:42.310 Oversized SGL: Not Supported 00:30:42.310 SGL Metadata Address: Not Supported 00:30:42.310 SGL Offset: Not Supported 00:30:42.310 Transport SGL Data Block: Not Supported 00:30:42.310 Replay Protected Memory Block: Not Supported 00:30:42.310 00:30:42.310 Firmware Slot Information 00:30:42.310 ========================= 00:30:42.310 Active slot: 1 00:30:42.310 Slot 1 Firmware Revision: 1.0 00:30:42.310 00:30:42.310 00:30:42.310 Commands Supported and Effects 00:30:42.310 ============================== 00:30:42.310 Admin Commands 00:30:42.310 -------------- 00:30:42.310 Delete I/O Submission Queue (00h): Supported 00:30:42.310 Create I/O Submission Queue (01h): Supported 00:30:42.310 Get Log Page (02h): Supported 00:30:42.310 Delete I/O Completion Queue (04h): Supported 00:30:42.310 Create I/O Completion Queue (05h): Supported 00:30:42.310 Identify (06h): Supported 00:30:42.310 Abort (08h): Supported 00:30:42.310 Set Features (09h): Supported 00:30:42.310 Get Features (0Ah): Supported 00:30:42.310 Asynchronous Event Request (0Ch): Supported 00:30:42.310 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:42.310 Directive Send (19h): Supported 00:30:42.310 Directive Receive (1Ah): Supported 00:30:42.310 Virtualization Management (1Ch): Supported 00:30:42.310 Doorbell Buffer Config (7Ch): Supported 00:30:42.310 Format NVM (80h): Supported LBA-Change
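
Two fields that recur in every dump here decode as follows, using the 12341 namespace reported just below; both values are from this log, and only the shell arithmetic is added (assuming the 4096-byte current LBA format #04 shown with that namespace):

    echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB -> "Size (in LBAs): 1310720 (5GiB)"
    echo $(( 323 - 273 ))        # 50 -> "Current Temperature: 323 Kelvin (50 Celsius)"
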
00:30:42.310 I/O Commands 00:30:42.310 ------------ 00:30:42.310 Flush (00h): Supported LBA-Change 00:30:42.310 Write (01h): Supported LBA-Change 00:30:42.310 Read (02h): Supported 00:30:42.310 Compare (05h): Supported 00:30:42.310 Write Zeroes (08h): Supported LBA-Change 00:30:42.310 Dataset Management (09h): Supported LBA-Change 00:30:42.310 Unknown (0Ch): Supported 00:30:42.310 Unknown (12h): Supported 00:30:42.310 Copy (19h): Supported LBA-Change 00:30:42.310 Unknown (1Dh): Supported LBA-Change 00:30:42.310 00:30:42.310 Error Log 00:30:42.310 ========= 00:30:42.310 00:30:42.310 Arbitration 00:30:42.310 =========== 00:30:42.310 Arbitration Burst: no limit 00:30:42.310 00:30:42.310 Power Management 00:30:42.310 ================ 00:30:42.310 Number of Power States: 1 00:30:42.310 Current Power State: Power State #0 00:30:42.310 Power State #0: 00:30:42.310 Max Power: 25.00 W 00:30:42.310 Non-Operational State: Operational 00:30:42.310 Entry Latency: 16 microseconds 00:30:42.310 Exit Latency: 4 microseconds 00:30:42.310 Relative Read Throughput: 0 00:30:42.310 Relative Read Latency: 0 00:30:42.310 Relative Write Throughput: 0 00:30:42.310 Relative Write Latency: 0 00:30:42.310 Idle Power: Not Reported 00:30:42.310 Active Power: Not Reported 00:30:42.310 Non-Operational Permissive Mode: Not Supported 00:30:42.310 00:30:42.310 Health Information 00:30:42.310 ================== 00:30:42.310 Critical Warnings: 00:30:42.310 Available Spare Space: OK 00:30:42.310 Temperature: OK [2024-11-26 17:27:19.698149] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64589 terminated unexpected 00:30:42.310 Device Reliability: OK 00:30:42.310 Read Only: No 00:30:42.310 Volatile Memory Backup: OK 00:30:42.310 Current Temperature: 323 Kelvin (50 Celsius) 00:30:42.310 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:42.310 Available Spare: 0% 00:30:42.310 Available Spare Threshold: 0% 00:30:42.310 Life Percentage Used: 0% 00:30:42.310 Data Units Read: 1032 00:30:42.310 Data Units Written: 893 00:30:42.310 Host Read Commands: 45862 00:30:42.310 Host Write Commands: 44549 00:30:42.310 Controller Busy Time: 0 minutes 00:30:42.310 Power Cycles: 0 00:30:42.310 Power On Hours: 0 hours 00:30:42.311 Unsafe Shutdowns: 0 00:30:42.311 Unrecoverable Media Errors: 0 00:30:42.311 Lifetime Error Log Entries: 0 00:30:42.311 Warning Temperature Time: 0 minutes 00:30:42.311 Critical Temperature Time: 0 minutes 00:30:42.311 00:30:42.311 Number of Queues 00:30:42.311 ================ 00:30:42.311 Number of I/O Submission Queues: 64 00:30:42.311 Number of I/O Completion Queues: 64 00:30:42.311 00:30:42.311 ZNS Specific Controller Data 00:30:42.311 ============================ 00:30:42.311 Zone Append Size Limit: 0 00:30:42.311 00:30:42.311 00:30:42.311 Active Namespaces 00:30:42.311 ================= 00:30:42.311 Namespace ID:1 00:30:42.311 Error Recovery Timeout: Unlimited 00:30:42.311 Command Set Identifier: NVM (00h) 00:30:42.311 Deallocate: Supported 00:30:42.311 Deallocated/Unwritten Error: Supported 00:30:42.311 Deallocated Read Value: All 0x00 00:30:42.311 Deallocate in Write Zeroes: Not Supported 00:30:42.311 Deallocated Guard Field: 0xFFFF 00:30:42.311 Flush: Supported 00:30:42.311 Reservation: Not Supported 00:30:42.311 Namespace Sharing Capabilities: Private 00:30:42.311 Size (in LBAs): 1310720 (5GiB) 00:30:42.311 Capacity (in LBAs): 1310720 (5GiB) 00:30:42.311 Utilization (in LBAs): 1310720 (5GiB) 00:30:42.311 Thin Provisioning: Not Supported 00:30:42.311 Per-NS
Atomic Units: No 00:30:42.311 Maximum Single Source Range Length: 128 00:30:42.311 Maximum Copy Length: 128 00:30:42.311 Maximum Source Range Count: 128 00:30:42.311 NGUID/EUI64 Never Reused: No 00:30:42.311 Namespace Write Protected: No 00:30:42.311 Number of LBA Formats: 8 00:30:42.311 Current LBA Format: LBA Format #04 00:30:42.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.311 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.311 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.311 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.311 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.311 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.311 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.311 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.311 00:30:42.311 NVM Specific Namespace Data 00:30:42.311 =========================== 00:30:42.311 Logical Block Storage Tag Mask: 0 00:30:42.311 Protection Information Capabilities: 00:30:42.311 16b Guard Protection Information Storage Tag Support: No 00:30:42.311 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.311 Storage Tag Check Read Support: No 00:30:42.311 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.311 ===================================================== 00:30:42.311 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:42.311 ===================================================== 00:30:42.311 Controller Capabilities/Features 00:30:42.311 ================================ 00:30:42.311 Vendor ID: 1b36 00:30:42.311 Subsystem Vendor ID: 1af4 00:30:42.311 Serial Number: 12343 00:30:42.311 Model Number: QEMU NVMe Ctrl 00:30:42.311 Firmware Version: 8.0.0 00:30:42.311 Recommended Arb Burst: 6 00:30:42.311 IEEE OUI Identifier: 00 54 52 00:30:42.311 Multi-path I/O 00:30:42.311 May have multiple subsystem ports: No 00:30:42.311 May have multiple controllers: Yes 00:30:42.311 Associated with SR-IOV VF: No 00:30:42.311 Max Data Transfer Size: 524288 00:30:42.311 Max Number of Namespaces: 256 00:30:42.311 Max Number of I/O Queues: 64 00:30:42.311 NVMe Specification Version (VS): 1.4 00:30:42.311 NVMe Specification Version (Identify): 1.4 00:30:42.311 Maximum Queue Entries: 2048 00:30:42.311 Contiguous Queues Required: Yes 00:30:42.311 Arbitration Mechanisms Supported 00:30:42.311 Weighted Round Robin: Not Supported 00:30:42.311 Vendor Specific: Not Supported 00:30:42.311 Reset Timeout: 7500 ms 00:30:42.311 Doorbell Stride: 4 bytes 00:30:42.311 NVM Subsystem Reset: Not Supported 00:30:42.311 Command Sets Supported 00:30:42.311 NVM Command Set: Supported 00:30:42.311 Boot Partition: Not Supported 00:30:42.311 Memory Page Size Minimum: 4096 bytes 00:30:42.311 Memory Page Size 
Maximum: 65536 bytes 00:30:42.311 Persistent Memory Region: Not Supported 00:30:42.311 Optional Asynchronous Events Supported 00:30:42.311 Namespace Attribute Notices: Supported 00:30:42.311 Firmware Activation Notices: Not Supported 00:30:42.311 ANA Change Notices: Not Supported 00:30:42.311 PLE Aggregate Log Change Notices: Not Supported 00:30:42.311 LBA Status Info Alert Notices: Not Supported 00:30:42.311 EGE Aggregate Log Change Notices: Not Supported 00:30:42.311 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.311 Zone Descriptor Change Notices: Not Supported 00:30:42.311 Discovery Log Change Notices: Not Supported 00:30:42.311 Controller Attributes 00:30:42.311 128-bit Host Identifier: Not Supported 00:30:42.311 Non-Operational Permissive Mode: Not Supported 00:30:42.311 NVM Sets: Not Supported 00:30:42.311 Read Recovery Levels: Not Supported 00:30:42.311 Endurance Groups: Supported 00:30:42.311 Predictable Latency Mode: Not Supported 00:30:42.311 Traffic Based Keep Alive: Not Supported 00:30:42.311 Namespace Granularity: Not Supported 00:30:42.311 SQ Associations: Not Supported 00:30:42.311 UUID List: Not Supported 00:30:42.311 Multi-Domain Subsystem: Not Supported 00:30:42.311 Fixed Capacity Management: Not Supported 00:30:42.311 Variable Capacity Management: Not Supported 00:30:42.311 Delete Endurance Group: Not Supported 00:30:42.311 Delete NVM Set: Not Supported 00:30:42.311 Extended LBA Formats Supported: Supported 00:30:42.311 Flexible Data Placement Supported: Supported 00:30:42.311 00:30:42.311 Controller Memory Buffer Support 00:30:42.311 ================================ 00:30:42.311 Supported: No 00:30:42.311 00:30:42.311 Persistent Memory Region Support 00:30:42.311 ================================ 00:30:42.311 Supported: No 00:30:42.311 00:30:42.311 Admin Command Set Attributes 00:30:42.311 ============================ 00:30:42.311 Security Send/Receive: Not Supported 00:30:42.311 Format NVM: Supported 00:30:42.311 Firmware Activate/Download: Not Supported 00:30:42.311 Namespace Management: Supported 00:30:42.311 Device Self-Test: Not Supported 00:30:42.311 Directives: Supported 00:30:42.311 NVMe-MI: Not Supported 00:30:42.311 Virtualization Management: Not Supported 00:30:42.311 Doorbell Buffer Config: Supported 00:30:42.311 Get LBA Status Capability: Not Supported 00:30:42.311 Command & Feature Lockdown Capability: Not Supported 00:30:42.311 Abort Command Limit: 4 00:30:42.311 Async Event Request Limit: 4 00:30:42.311 Number of Firmware Slots: N/A 00:30:42.311 Firmware Slot 1 Read-Only: N/A 00:30:42.311 Firmware Activation Without Reset: N/A 00:30:42.311 Multiple Update Detection Support: N/A 00:30:42.311 Firmware Update Granularity: No Information Provided 00:30:42.311 Per-Namespace SMART Log: Yes 00:30:42.311 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.311 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:42.311 Command Effects Log Page: Supported 00:30:42.311 Get Log Page Extended Data: Supported 00:30:42.311 Telemetry Log Pages: Not Supported 00:30:42.311 Persistent Event Log Pages: Not Supported 00:30:42.311 Supported Log Pages Log Page: May Support 00:30:42.311 Commands Supported & Effects Log Page: Not Supported 00:30:42.311 Feature Identifiers & Effects Log Page: May Support 00:30:42.311 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.311 Data Area 4 for Telemetry Log: Not Supported 00:30:42.311 Error Log Page Entries Supported: 1 00:30:42.311 Keep Alive: Not Supported 00:30:42.311 00:30:42.311 NVM Command Set
Attributes 00:30:42.311 ========================== 00:30:42.311 Submission Queue Entry Size 00:30:42.311 Max: 64 00:30:42.311 Min: 64 00:30:42.311 Completion Queue Entry Size 00:30:42.311 Max: 16 00:30:42.311 Min: 16 00:30:42.311 Number of Namespaces: 256 00:30:42.311 Compare Command: Supported 00:30:42.311 Write Uncorrectable Command: Not Supported 00:30:42.311 Dataset Management Command: Supported 00:30:42.311 Write Zeroes Command: Supported 00:30:42.311 Set Features Save Field: Supported 00:30:42.311 Reservations: Not Supported 00:30:42.311 Timestamp: Supported 00:30:42.311 Copy: Supported 00:30:42.311 Volatile Write Cache: Present 00:30:42.311 Atomic Write Unit (Normal): 1 00:30:42.311 Atomic Write Unit (PFail): 1 00:30:42.311 Atomic Compare & Write Unit: 1 00:30:42.311 Fused Compare & Write: Not Supported 00:30:42.311 Scatter-Gather List 00:30:42.311 SGL Command Set: Supported 00:30:42.312 SGL Keyed: Not Supported 00:30:42.312 SGL Bit Bucket Descriptor: Not Supported 00:30:42.312 SGL Metadata Pointer: Not Supported 00:30:42.312 Oversized SGL: Not Supported 00:30:42.312 SGL Metadata Address: Not Supported 00:30:42.312 SGL Offset: Not Supported 00:30:42.312 Transport SGL Data Block: Not Supported 00:30:42.312 Replay Protected Memory Block: Not Supported 00:30:42.312 00:30:42.312 Firmware Slot Information 00:30:42.312 ========================= 00:30:42.312 Active slot: 1 00:30:42.312 Slot 1 Firmware Revision: 1.0 00:30:42.312 00:30:42.312 00:30:42.312 Commands Supported and Effects 00:30:42.312 ============================== 00:30:42.312 Admin Commands 00:30:42.312 -------------- 00:30:42.312 Delete I/O Submission Queue (00h): Supported 00:30:42.312 Create I/O Submission Queue (01h): Supported 00:30:42.312 Get Log Page (02h): Supported 00:30:42.312 Delete I/O Completion Queue (04h): Supported 00:30:42.312 Create I/O Completion Queue (05h): Supported 00:30:42.312 Identify (06h): Supported 00:30:42.312 Abort (08h): Supported 00:30:42.312 Set Features (09h): Supported 00:30:42.312 Get Features (0Ah): Supported 00:30:42.312 Asynchronous Event Request (0Ch): Supported 00:30:42.312 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:42.312 Directive Send (19h): Supported 00:30:42.312 Directive Receive (1Ah): Supported 00:30:42.312 Virtualization Management (1Ch): Supported 00:30:42.312 Doorbell Buffer Config (7Ch): Supported 00:30:42.312 Format NVM (80h): Supported LBA-Change 00:30:42.312 I/O Commands 00:30:42.312 ------------ 00:30:42.312 Flush (00h): Supported LBA-Change 00:30:42.312 Write (01h): Supported LBA-Change 00:30:42.312 Read (02h): Supported 00:30:42.312 Compare (05h): Supported 00:30:42.312 Write Zeroes (08h): Supported LBA-Change 00:30:42.312 Dataset Management (09h): Supported LBA-Change 00:30:42.312 Unknown (0Ch): Supported 00:30:42.312 Unknown (12h): Supported 00:30:42.312 Copy (19h): Supported LBA-Change 00:30:42.312 Unknown (1Dh): Supported LBA-Change 00:30:42.312 00:30:42.312 Error Log 00:30:42.312 ========= 00:30:42.312 00:30:42.312 Arbitration 00:30:42.312 =========== 00:30:42.312 Arbitration Burst: no limit 00:30:42.312 00:30:42.312 Power Management 00:30:42.312 ================ 00:30:42.312 Number of Power States: 1 00:30:42.312 Current Power State: Power State #0 00:30:42.312 Power State #0: 00:30:42.312 Max Power: 25.00 W 00:30:42.312 Non-Operational State: Operational 00:30:42.312 Entry Latency: 16 microseconds 00:30:42.312 Exit Latency: 4 microseconds 00:30:42.312 Relative Read Throughput: 0 00:30:42.312 Relative Read Latency: 0 00:30:42.312 Relative 
Write Throughput: 0 00:30:42.312 Relative Write Latency: 0 00:30:42.312 Idle Power: Not Reported 00:30:42.312 Active Power: Not Reported 00:30:42.312 Non-Operational Permissive Mode: Not Supported 00:30:42.312 00:30:42.312 Health Information 00:30:42.312 ================== 00:30:42.312 Critical Warnings: 00:30:42.312 Available Spare Space: OK 00:30:42.312 Temperature: OK 00:30:42.312 Device Reliability: OK 00:30:42.312 Read Only: No 00:30:42.312 Volatile Memory Backup: OK 00:30:42.312 Current Temperature: 323 Kelvin (50 Celsius) 00:30:42.312 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:42.312 Available Spare: 0% 00:30:42.312 Available Spare Threshold: 0% 00:30:42.312 Life Percentage Used: 0% 00:30:42.312 Data Units Read: 902 00:30:42.312 Data Units Written: 832 00:30:42.312 Host Read Commands: 33288 00:30:42.312 Host Write Commands: 32711 00:30:42.312 Controller Busy Time: 0 minutes 00:30:42.312 Power Cycles: 0 00:30:42.312 Power On Hours: 0 hours 00:30:42.312 Unsafe Shutdowns: 0 00:30:42.312 Unrecoverable Media Errors: 0 00:30:42.312 Lifetime Error Log Entries: 0 00:30:42.312 Warning Temperature Time: 0 minutes 00:30:42.312 Critical Temperature Time: 0 minutes 00:30:42.312 00:30:42.312 Number of Queues 00:30:42.312 ================ 00:30:42.312 Number of I/O Submission Queues: 64 00:30:42.312 Number of I/O Completion Queues: 64 00:30:42.312 00:30:42.312 ZNS Specific Controller Data 00:30:42.312 ============================ 00:30:42.312 Zone Append Size Limit: 0 00:30:42.312 00:30:42.312 00:30:42.312 Active Namespaces 00:30:42.312 ================= 00:30:42.312 Namespace ID:1 00:30:42.312 Error Recovery Timeout: Unlimited 00:30:42.312 Command Set Identifier: NVM (00h) 00:30:42.312 Deallocate: Supported 00:30:42.312 Deallocated/Unwritten Error: Supported 00:30:42.312 Deallocated Read Value: All 0x00 00:30:42.312 Deallocate in Write Zeroes: Not Supported 00:30:42.312 Deallocated Guard Field: 0xFFFF 00:30:42.312 Flush: Supported 00:30:42.312 Reservation: Not Supported 00:30:42.312 Namespace Sharing Capabilities: Multiple Controllers 00:30:42.312 Size (in LBAs): 262144 (1GiB) 00:30:42.312 Capacity (in LBAs): 262144 (1GiB) 00:30:42.312 Utilization (in LBAs): 262144 (1GiB) 00:30:42.312 Thin Provisioning: Not Supported 00:30:42.312 Per-NS Atomic Units: No 00:30:42.312 Maximum Single Source Range Length: 128 00:30:42.312 Maximum Copy Length: 128 00:30:42.312 Maximum Source Range Count: 128 00:30:42.312 NGUID/EUI64 Never Reused: No 00:30:42.312 Namespace Write Protected: No 00:30:42.312 Endurance group ID: 1 00:30:42.312 Number of LBA Formats: 8 00:30:42.312 Current LBA Format: LBA Format #04 00:30:42.312 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.312 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.312 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.312 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.312 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.312 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.312 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.312 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.312 00:30:42.312 Get Feature FDP: 00:30:42.312 ================ 00:30:42.312 Enabled: Yes 00:30:42.312 FDP configuration index: 0 00:30:42.312 00:30:42.312 FDP configurations log page 00:30:42.312 =========================== 00:30:42.312 Number of FDP configurations: 1 00:30:42.312 Version: 0 00:30:42.312 Size: 112 00:30:42.312 FDP Configuration Descriptor: 0 00:30:42.312 Descriptor Size: 96 00:30:42.312 
Reclaim Group Identifier format: 2 00:30:42.312 FDP Volatile Write Cache: Not Present 00:30:42.312 FDP Configuration: Valid 00:30:42.312 Vendor Specific Size: 0 00:30:42.312 Number of Reclaim Groups: 2 00:30:42.312 Number of Reclaim Unit Handles: 8 00:30:42.312 Max Placement Identifiers: 128 00:30:42.312 Number of Namespaces Supported: 256 00:30:42.312 Reclaim Unit Nominal Size: 6000000 bytes 00:30:42.312 Estimated Reclaim Unit Time Limit: Not Reported 00:30:42.312 RUH Desc #000: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #001: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #002: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #003: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #004: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #005: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #006: RUH Type: Initially Isolated 00:30:42.312 RUH Desc #007: RUH Type: Initially Isolated 00:30:42.312 00:30:42.312 FDP reclaim unit handle usage log page 00:30:42.312 ====================================== 00:30:42.312 Number of Reclaim Unit Handles: 8 00:30:42.312 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:42.312 RUH Usage Desc #001: RUH Attributes: Unused 00:30:42.312 RUH Usage Desc #002: RUH Attributes: Unused 00:30:42.312 RUH Usage Desc #003: RUH Attributes: Unused 00:30:42.312 RUH Usage Desc #004: RUH Attributes: Unused 00:30:42.312 RUH Usage Desc #005: RUH Attributes: Unused 00:30:42.312 RUH Usage Desc #006: RUH Attributes: Unused 00:30:42.312 RUH Usage Desc #007: RUH Attributes: Unused 00:30:42.312 00:30:42.312 FDP statistics log page 00:30:42.312 ======================= 00:30:42.312 Host bytes with metadata written: 510697472 00:30:42.312 Media bytes with metadata written: 512786432 [2024-11-26 17:27:19.700025] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64589 terminated unexpected 00:30:42.312 Media bytes erased: 0 00:30:42.312 00:30:42.312 FDP events log page 00:30:42.312 =================== 00:30:42.312 Number of FDP events: 0 00:30:42.312 00:30:42.312 NVM Specific Namespace Data 00:30:42.312 =========================== 00:30:42.312 Logical Block Storage Tag Mask: 0 00:30:42.312 Protection Information Capabilities: 00:30:42.312 16b Guard Protection Information Storage Tag Support: No 00:30:42.312 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.312 Storage Tag Check Read Support: No 00:30:42.312 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.312 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.312 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.313 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.313 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.313 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.313 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.313 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.313 ===================================================== 00:30:42.313 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:42.313 ===================================================== 00:30:42.313 Controller Capabilities/Features 00:30:42.313
================================ 00:30:42.313 Vendor ID: 1b36 00:30:42.313 Subsystem Vendor ID: 1af4 00:30:42.313 Serial Number: 12342 00:30:42.313 Model Number: QEMU NVMe Ctrl 00:30:42.313 Firmware Version: 8.0.0 00:30:42.313 Recommended Arb Burst: 6 00:30:42.313 IEEE OUI Identifier: 00 54 52 00:30:42.313 Multi-path I/O 00:30:42.313 May have multiple subsystem ports: No 00:30:42.313 May have multiple controllers: No 00:30:42.313 Associated with SR-IOV VF: No 00:30:42.313 Max Data Transfer Size: 524288 00:30:42.313 Max Number of Namespaces: 256 00:30:42.313 Max Number of I/O Queues: 64 00:30:42.313 NVMe Specification Version (VS): 1.4 00:30:42.313 NVMe Specification Version (Identify): 1.4 00:30:42.313 Maximum Queue Entries: 2048 00:30:42.313 Contiguous Queues Required: Yes 00:30:42.313 Arbitration Mechanisms Supported 00:30:42.313 Weighted Round Robin: Not Supported 00:30:42.313 Vendor Specific: Not Supported 00:30:42.313 Reset Timeout: 7500 ms 00:30:42.313 Doorbell Stride: 4 bytes 00:30:42.313 NVM Subsystem Reset: Not Supported 00:30:42.313 Command Sets Supported 00:30:42.313 NVM Command Set: Supported 00:30:42.313 Boot Partition: Not Supported 00:30:42.313 Memory Page Size Minimum: 4096 bytes 00:30:42.313 Memory Page Size Maximum: 65536 bytes 00:30:42.313 Persistent Memory Region: Not Supported 00:30:42.313 Optional Asynchronous Events Supported 00:30:42.313 Namespace Attribute Notices: Supported 00:30:42.313 Firmware Activation Notices: Not Supported 00:30:42.313 ANA Change Notices: Not Supported 00:30:42.313 PLE Aggregate Log Change Notices: Not Supported 00:30:42.313 LBA Status Info Alert Notices: Not Supported 00:30:42.313 EGE Aggregate Log Change Notices: Not Supported 00:30:42.313 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.313 Zone Descriptor Change Notices: Not Supported 00:30:42.313 Discovery Log Change Notices: Not Supported 00:30:42.313 Controller Attributes 00:30:42.313 128-bit Host Identifier: Not Supported 00:30:42.313 Non-Operational Permissive Mode: Not Supported 00:30:42.313 NVM Sets: Not Supported 00:30:42.313 Read Recovery Levels: Not Supported 00:30:42.313 Endurance Groups: Not Supported 00:30:42.313 Predictable Latency Mode: Not Supported 00:30:42.313 Traffic Based Keep Alive: Not Supported 00:30:42.313 Namespace Granularity: Not Supported 00:30:42.313 SQ Associations: Not Supported 00:30:42.313 UUID List: Not Supported 00:30:42.313 Multi-Domain Subsystem: Not Supported 00:30:42.313 Fixed Capacity Management: Not Supported 00:30:42.313 Variable Capacity Management: Not Supported 00:30:42.313 Delete Endurance Group: Not Supported 00:30:42.313 Delete NVM Set: Not Supported 00:30:42.313 Extended LBA Formats Supported: Supported 00:30:42.313 Flexible Data Placement Supported: Not Supported 00:30:42.313 00:30:42.313 Controller Memory Buffer Support 00:30:42.313 ================================ 00:30:42.313 Supported: No 00:30:42.313 00:30:42.313 Persistent Memory Region Support 00:30:42.313 ================================ 00:30:42.313 Supported: No 00:30:42.313 00:30:42.313 Admin Command Set Attributes 00:30:42.313 ============================ 00:30:42.313 Security Send/Receive: Not Supported 00:30:42.313 Format NVM: Supported 00:30:42.313 Firmware Activate/Download: Not Supported 00:30:42.313 Namespace Management: Supported 00:30:42.313 Device Self-Test: Not Supported 00:30:42.313 Directives: Supported 00:30:42.313 NVMe-MI: Not Supported 00:30:42.313 Virtualization Management: Not Supported 00:30:42.313 Doorbell Buffer Config: Supported 00:30:42.313 Get
LBA Status Capability: Not Supported 00:30:42.313 Command & Feature Lockdown Capability: Not Supported 00:30:42.313 Abort Command Limit: 4 00:30:42.313 Async Event Request Limit: 4 00:30:42.313 Number of Firmware Slots: N/A 00:30:42.313 Firmware Slot 1 Read-Only: N/A 00:30:42.313 Firmware Activation Without Reset: N/A 00:30:42.313 Multiple Update Detection Support: N/A 00:30:42.313 Firmware Update Granularity: No Information Provided 00:30:42.313 Per-Namespace SMART Log: Yes 00:30:42.313 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.313 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:42.313 Command Effects Log Page: Supported 00:30:42.313 Get Log Page Extended Data: Supported 00:30:42.313 Telemetry Log Pages: Not Supported 00:30:42.313 Persistent Event Log Pages: Not Supported 00:30:42.313 Supported Log Pages Log Page: May Support 00:30:42.313 Commands Supported & Effects Log Page: Not Supported 00:30:42.313 Feature Identifiers & Effects Log Page:May Support 00:30:42.313 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.313 Data Area 4 for Telemetry Log: Not Supported 00:30:42.313 Error Log Page Entries Supported: 1 00:30:42.313 Keep Alive: Not Supported 00:30:42.313 00:30:42.313 NVM Command Set Attributes 00:30:42.313 ========================== 00:30:42.313 Submission Queue Entry Size 00:30:42.313 Max: 64 00:30:42.313 Min: 64 00:30:42.313 Completion Queue Entry Size 00:30:42.313 Max: 16 00:30:42.313 Min: 16 00:30:42.313 Number of Namespaces: 256 00:30:42.313 Compare Command: Supported 00:30:42.313 Write Uncorrectable Command: Not Supported 00:30:42.313 Dataset Management Command: Supported 00:30:42.313 Write Zeroes Command: Supported 00:30:42.313 Set Features Save Field: Supported 00:30:42.313 Reservations: Not Supported 00:30:42.313 Timestamp: Supported 00:30:42.313 Copy: Supported 00:30:42.313 Volatile Write Cache: Present 00:30:42.313 Atomic Write Unit (Normal): 1 00:30:42.313 Atomic Write Unit (PFail): 1 00:30:42.313 Atomic Compare & Write Unit: 1 00:30:42.313 Fused Compare & Write: Not Supported 00:30:42.313 Scatter-Gather List 00:30:42.313 SGL Command Set: Supported 00:30:42.313 SGL Keyed: Not Supported 00:30:42.313 SGL Bit Bucket Descriptor: Not Supported 00:30:42.313 SGL Metadata Pointer: Not Supported 00:30:42.313 Oversized SGL: Not Supported 00:30:42.313 SGL Metadata Address: Not Supported 00:30:42.313 SGL Offset: Not Supported 00:30:42.313 Transport SGL Data Block: Not Supported 00:30:42.313 Replay Protected Memory Block: Not Supported 00:30:42.313 00:30:42.313 Firmware Slot Information 00:30:42.313 ========================= 00:30:42.313 Active slot: 1 00:30:42.313 Slot 1 Firmware Revision: 1.0 00:30:42.313 00:30:42.313 00:30:42.313 Commands Supported and Effects 00:30:42.313 ============================== 00:30:42.313 Admin Commands 00:30:42.313 -------------- 00:30:42.313 Delete I/O Submission Queue (00h): Supported 00:30:42.313 Create I/O Submission Queue (01h): Supported 00:30:42.313 Get Log Page (02h): Supported 00:30:42.313 Delete I/O Completion Queue (04h): Supported 00:30:42.313 Create I/O Completion Queue (05h): Supported 00:30:42.313 Identify (06h): Supported 00:30:42.313 Abort (08h): Supported 00:30:42.313 Set Features (09h): Supported 00:30:42.313 Get Features (0Ah): Supported 00:30:42.313 Asynchronous Event Request (0Ch): Supported 00:30:42.313 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:42.313 Directive Send (19h): Supported 00:30:42.314 Directive Receive (1Ah): Supported 00:30:42.314 Virtualization Management (1Ch): 
Supported 00:30:42.314 Doorbell Buffer Config (7Ch): Supported 00:30:42.314 Format NVM (80h): Supported LBA-Change 00:30:42.314 I/O Commands 00:30:42.314 ------------ 00:30:42.314 Flush (00h): Supported LBA-Change 00:30:42.314 Write (01h): Supported LBA-Change 00:30:42.314 Read (02h): Supported 00:30:42.314 Compare (05h): Supported 00:30:42.314 Write Zeroes (08h): Supported LBA-Change 00:30:42.314 Dataset Management (09h): Supported LBA-Change 00:30:42.314 Unknown (0Ch): Supported 00:30:42.314 Unknown (12h): Supported 00:30:42.314 Copy (19h): Supported LBA-Change 00:30:42.314 Unknown (1Dh): Supported LBA-Change 00:30:42.314 00:30:42.314 Error Log 00:30:42.314 ========= 00:30:42.314 00:30:42.314 Arbitration 00:30:42.314 =========== 00:30:42.314 Arbitration Burst: no limit 00:30:42.314 00:30:42.314 Power Management 00:30:42.314 ================ 00:30:42.314 Number of Power States: 1 00:30:42.314 Current Power State: Power State #0 00:30:42.314 Power State #0: 00:30:42.314 Max Power: 25.00 W 00:30:42.314 Non-Operational State: Operational 00:30:42.314 Entry Latency: 16 microseconds 00:30:42.314 Exit Latency: 4 microseconds 00:30:42.314 Relative Read Throughput: 0 00:30:42.314 Relative Read Latency: 0 00:30:42.314 Relative Write Throughput: 0 00:30:42.314 Relative Write Latency: 0 00:30:42.314 Idle Power: Not Reported 00:30:42.314 Active Power: Not Reported 00:30:42.314 Non-Operational Permissive Mode: Not Supported 00:30:42.314 00:30:42.314 Health Information 00:30:42.314 ================== 00:30:42.314 Critical Warnings: 00:30:42.314 Available Spare Space: OK 00:30:42.314 Temperature: OK 00:30:42.314 Device Reliability: OK 00:30:42.314 Read Only: No 00:30:42.314 Volatile Memory Backup: OK 00:30:42.314 Current Temperature: 323 Kelvin (50 Celsius) 00:30:42.314 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:42.314 Available Spare: 0% 00:30:42.314 Available Spare Threshold: 0% 00:30:42.314 Life Percentage Used: 0% 00:30:42.314 Data Units Read: 2222 00:30:42.314 Data Units Written: 2009 00:30:42.314 Host Read Commands: 96016 00:30:42.314 Host Write Commands: 94285 00:30:42.314 Controller Busy Time: 0 minutes 00:30:42.314 Power Cycles: 0 00:30:42.314 Power On Hours: 0 hours 00:30:42.314 Unsafe Shutdowns: 0 00:30:42.314 Unrecoverable Media Errors: 0 00:30:42.314 Lifetime Error Log Entries: 0 00:30:42.314 Warning Temperature Time: 0 minutes 00:30:42.314 Critical Temperature Time: 0 minutes 00:30:42.314 00:30:42.314 Number of Queues 00:30:42.314 ================ 00:30:42.314 Number of I/O Submission Queues: 64 00:30:42.314 Number of I/O Completion Queues: 64 00:30:42.314 00:30:42.314 ZNS Specific Controller Data 00:30:42.314 ============================ 00:30:42.314 Zone Append Size Limit: 0 00:30:42.314 00:30:42.314 00:30:42.314 Active Namespaces 00:30:42.314 ================= 00:30:42.314 Namespace ID:1 00:30:42.314 Error Recovery Timeout: Unlimited 00:30:42.314 Command Set Identifier: NVM (00h) 00:30:42.314 Deallocate: Supported 00:30:42.314 Deallocated/Unwritten Error: Supported 00:30:42.314 Deallocated Read Value: All 0x00 00:30:42.314 Deallocate in Write Zeroes: Not Supported 00:30:42.314 Deallocated Guard Field: 0xFFFF 00:30:42.314 Flush: Supported 00:30:42.314 Reservation: Not Supported 00:30:42.314 Namespace Sharing Capabilities: Private 00:30:42.314 Size (in LBAs): 1048576 (4GiB) 00:30:42.314 Capacity (in LBAs): 1048576 (4GiB) 00:30:42.314 Utilization (in LBAs): 1048576 (4GiB) 00:30:42.314 Thin Provisioning: Not Supported 00:30:42.314 Per-NS Atomic Units: No 00:30:42.314 Maximum 
Single Source Range Length: 128 00:30:42.314 Maximum Copy Length: 128 00:30:42.314 Maximum Source Range Count: 128 00:30:42.314 NGUID/EUI64 Never Reused: No 00:30:42.314 Namespace Write Protected: No 00:30:42.314 Number of LBA Formats: 8 00:30:42.314 Current LBA Format: LBA Format #04 00:30:42.314 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.314 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.314 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.314 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.314 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.314 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.314 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.314 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.314 00:30:42.314 NVM Specific Namespace Data 00:30:42.314 =========================== 00:30:42.314 Logical Block Storage Tag Mask: 0 00:30:42.314 Protection Information Capabilities: 00:30:42.314 16b Guard Protection Information Storage Tag Support: No 00:30:42.314 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.314 Storage Tag Check Read Support: No 00:30:42.314 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Namespace ID:2 00:30:42.314 Error Recovery Timeout: Unlimited 00:30:42.314 Command Set Identifier: NVM (00h) 00:30:42.314 Deallocate: Supported 00:30:42.314 Deallocated/Unwritten Error: Supported 00:30:42.314 Deallocated Read Value: All 0x00 00:30:42.314 Deallocate in Write Zeroes: Not Supported 00:30:42.314 Deallocated Guard Field: 0xFFFF 00:30:42.314 Flush: Supported 00:30:42.314 Reservation: Not Supported 00:30:42.314 Namespace Sharing Capabilities: Private 00:30:42.314 Size (in LBAs): 1048576 (4GiB) 00:30:42.314 Capacity (in LBAs): 1048576 (4GiB) 00:30:42.314 Utilization (in LBAs): 1048576 (4GiB) 00:30:42.314 Thin Provisioning: Not Supported 00:30:42.314 Per-NS Atomic Units: No 00:30:42.314 Maximum Single Source Range Length: 128 00:30:42.314 Maximum Copy Length: 128 00:30:42.314 Maximum Source Range Count: 128 00:30:42.314 NGUID/EUI64 Never Reused: No 00:30:42.314 Namespace Write Protected: No 00:30:42.314 Number of LBA Formats: 8 00:30:42.314 Current LBA Format: LBA Format #04 00:30:42.314 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.314 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.314 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.314 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.314 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.314 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.314 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.314 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.314 00:30:42.314 NVM 
Specific Namespace Data 00:30:42.314 =========================== 00:30:42.314 Logical Block Storage Tag Mask: 0 00:30:42.314 Protection Information Capabilities: 00:30:42.314 16b Guard Protection Information Storage Tag Support: No 00:30:42.314 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.314 Storage Tag Check Read Support: No 00:30:42.314 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.314 Namespace ID:3 00:30:42.314 Error Recovery Timeout: Unlimited 00:30:42.314 Command Set Identifier: NVM (00h) 00:30:42.314 Deallocate: Supported 00:30:42.314 Deallocated/Unwritten Error: Supported 00:30:42.314 Deallocated Read Value: All 0x00 00:30:42.314 Deallocate in Write Zeroes: Not Supported 00:30:42.314 Deallocated Guard Field: 0xFFFF 00:30:42.314 Flush: Supported 00:30:42.314 Reservation: Not Supported 00:30:42.314 Namespace Sharing Capabilities: Private 00:30:42.314 Size (in LBAs): 1048576 (4GiB) 00:30:42.574 Capacity (in LBAs): 1048576 (4GiB) 00:30:42.574 Utilization (in LBAs): 1048576 (4GiB) 00:30:42.574 Thin Provisioning: Not Supported 00:30:42.574 Per-NS Atomic Units: No 00:30:42.574 Maximum Single Source Range Length: 128 00:30:42.574 Maximum Copy Length: 128 00:30:42.574 Maximum Source Range Count: 128 00:30:42.574 NGUID/EUI64 Never Reused: No 00:30:42.574 Namespace Write Protected: No 00:30:42.574 Number of LBA Formats: 8 00:30:42.574 Current LBA Format: LBA Format #04 00:30:42.574 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.574 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.574 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.574 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.574 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.574 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.574 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.574 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.574 00:30:42.574 NVM Specific Namespace Data 00:30:42.574 =========================== 00:30:42.574 Logical Block Storage Tag Mask: 0 00:30:42.574 Protection Information Capabilities: 00:30:42.574 16b Guard Protection Information Storage Tag Support: No 00:30:42.574 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.574 Storage Tag Check Read Support: No 00:30:42.574 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.574 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:42.574 17:27:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:42.834 ===================================================== 00:30:42.834 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:42.834 ===================================================== 00:30:42.834 Controller Capabilities/Features 00:30:42.834 ================================ 00:30:42.834 Vendor ID: 1b36 00:30:42.834 Subsystem Vendor ID: 1af4 00:30:42.834 Serial Number: 12340 00:30:42.834 Model Number: QEMU NVMe Ctrl 00:30:42.834 Firmware Version: 8.0.0 00:30:42.834 Recommended Arb Burst: 6 00:30:42.834 IEEE OUI Identifier: 00 54 52 00:30:42.834 Multi-path I/O 00:30:42.834 May have multiple subsystem ports: No 00:30:42.834 May have multiple controllers: No 00:30:42.834 Associated with SR-IOV VF: No 00:30:42.834 Max Data Transfer Size: 524288 00:30:42.834 Max Number of Namespaces: 256 00:30:42.834 Max Number of I/O Queues: 64 00:30:42.834 NVMe Specification Version (VS): 1.4 00:30:42.834 NVMe Specification Version (Identify): 1.4 00:30:42.834 Maximum Queue Entries: 2048 00:30:42.834 Contiguous Queues Required: Yes 00:30:42.834 Arbitration Mechanisms Supported 00:30:42.834 Weighted Round Robin: Not Supported 00:30:42.834 Vendor Specific: Not Supported 00:30:42.834 Reset Timeout: 7500 ms 00:30:42.834 Doorbell Stride: 4 bytes 00:30:42.834 NVM Subsystem Reset: Not Supported 00:30:42.834 Command Sets Supported 00:30:42.834 NVM Command Set: Supported 00:30:42.834 Boot Partition: Not Supported 00:30:42.834 Memory Page Size Minimum: 4096 bytes 00:30:42.834 Memory Page Size Maximum: 65536 bytes 00:30:42.834 Persistent Memory Region: Not Supported 00:30:42.834 Optional Asynchronous Events Supported 00:30:42.834 Namespace Attribute Notices: Supported 00:30:42.834 Firmware Activation Notices: Not Supported 00:30:42.834 ANA Change Notices: Not Supported 00:30:42.834 PLE Aggregate Log Change Notices: Not Supported 00:30:42.834 LBA Status Info Alert Notices: Not Supported 00:30:42.834 EGE Aggregate Log Change Notices: Not Supported 00:30:42.834 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.834 Zone Descriptor Change Notices: Not Supported 00:30:42.834 Discovery Log Change Notices: Not Supported 00:30:42.834 Controller Attributes 00:30:42.834 128-bit Host Identifier: Not Supported 00:30:42.834 Non-Operational Permissive Mode: Not Supported 00:30:42.834 NVM Sets: Not Supported 00:30:42.834 Read Recovery Levels: Not Supported 00:30:42.834 Endurance Groups: Not Supported 00:30:42.834 Predictable Latency Mode: Not Supported 00:30:42.834 Traffic Based Keep ALive: Not Supported 00:30:42.834 Namespace Granularity: Not Supported 00:30:42.834 SQ Associations: Not Supported 00:30:42.834 UUID List: Not Supported 00:30:42.834 Multi-Domain Subsystem: Not Supported 00:30:42.834 Fixed Capacity Management: Not Supported 00:30:42.834 Variable Capacity Management: Not Supported 00:30:42.834 Delete Endurance Group: Not Supported 00:30:42.834 Delete NVM Set: Not Supported 
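
The identify dumps in this stage come from SPDK's spdk_nvme_identify example binary, which nvme.sh invokes once per PCIe BDF (0000:00:10.0 through 0000:00:13.0, per the `for bdf in "${bdfs[@]}"` trace above). The same Identify Controller fields it prints (serial, model, firmware revision, MDTS, namespace count) are reachable programmatically through SPDK's probe/attach flow. Below is a minimal C sketch, assuming only the public spdk/nvme.h and spdk/env.h headers and the usual SPDK link line; the BDF is hard-coded to this run's first device purely for illustration.

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Attach to one PCIe NVMe controller by BDF and print a few of the
     * Identify Controller fields shown in the log above. Error handling
     * is pared down to keep the sketch short. */

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            (void)cb_ctx; (void)trid; (void)opts;
            return true;    /* attach to whatever we probed */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

            (void)trid; (void)opts;
            /* sn/mn/fr are fixed-width, space-padded spec fields, hence
             * the explicit precisions. */
            printf("Serial Number: %.20s\n", cdata->sn);
            printf("Model Number: %.40s\n", cdata->mn);
            printf("Firmware Version: %.8s\n", cdata->fr);
            printf("Max Number of Namespaces: %u\n", cdata->nn);
            /* Resolved MDTS in bytes -- 524288 for these QEMU controllers. */
            printf("Max Data Transfer Size: %u\n",
                   spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
            *(struct spdk_nvme_ctrlr **)cb_ctx = ctrlr;
    }

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr = NULL;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_PCIE);
            snprintf(trid.traddr, sizeof(trid.traddr), "%s", "0000:00:10.0");

            if (spdk_nvme_probe(&trid, &ctrlr, probe_cb, attach_cb, NULL) != 0 ||
                ctrlr == NULL) {
                    return 1;
            }
            spdk_nvme_detach(ctrlr);
            return 0;
    }

Note that spdk_nvme_ctrlr_get_max_xfer_size() already folds the power-of-two MDTS field together with the controller's minimum page size, which is why it matches the 524288-byte "Max Data Transfer Size" in the dumps.
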
00:30:42.834 Extended LBA Formats Supported: Supported 00:30:42.835 Flexible Data Placement Supported: Not Supported 00:30:42.835 00:30:42.835 Controller Memory Buffer Support 00:30:42.835 ================================ 00:30:42.835 Supported: No 00:30:42.835 00:30:42.835 Persistent Memory Region Support 00:30:42.835 ================================ 00:30:42.835 Supported: No 00:30:42.835 00:30:42.835 Admin Command Set Attributes 00:30:42.835 ============================ 00:30:42.835 Security Send/Receive: Not Supported 00:30:42.835 Format NVM: Supported 00:30:42.835 Firmware Activate/Download: Not Supported 00:30:42.835 Namespace Management: Supported 00:30:42.835 Device Self-Test: Not Supported 00:30:42.835 Directives: Supported 00:30:42.835 NVMe-MI: Not Supported 00:30:42.835 Virtualization Management: Not Supported 00:30:42.835 Doorbell Buffer Config: Supported 00:30:42.835 Get LBA Status Capability: Not Supported 00:30:42.835 Command & Feature Lockdown Capability: Not Supported 00:30:42.835 Abort Command Limit: 4 00:30:42.835 Async Event Request Limit: 4 00:30:42.835 Number of Firmware Slots: N/A 00:30:42.835 Firmware Slot 1 Read-Only: N/A 00:30:42.835 Firmware Activation Without Reset: N/A 00:30:42.835 Multiple Update Detection Support: N/A 00:30:42.835 Firmware Update Granularity: No Information Provided 00:30:42.835 Per-Namespace SMART Log: Yes 00:30:42.835 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.835 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:42.835 Command Effects Log Page: Supported 00:30:42.835 Get Log Page Extended Data: Supported 00:30:42.835 Telemetry Log Pages: Not Supported 00:30:42.835 Persistent Event Log Pages: Not Supported 00:30:42.835 Supported Log Pages Log Page: May Support 00:30:42.835 Commands Supported & Effects Log Page: Not Supported 00:30:42.835 Feature Identifiers & Effects Log Page:May Support 00:30:42.835 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.835 Data Area 4 for Telemetry Log: Not Supported 00:30:42.835 Error Log Page Entries Supported: 1 00:30:42.835 Keep Alive: Not Supported 00:30:42.835 00:30:42.835 NVM Command Set Attributes 00:30:42.835 ========================== 00:30:42.835 Submission Queue Entry Size 00:30:42.835 Max: 64 00:30:42.835 Min: 64 00:30:42.835 Completion Queue Entry Size 00:30:42.835 Max: 16 00:30:42.835 Min: 16 00:30:42.835 Number of Namespaces: 256 00:30:42.835 Compare Command: Supported 00:30:42.835 Write Uncorrectable Command: Not Supported 00:30:42.835 Dataset Management Command: Supported 00:30:42.835 Write Zeroes Command: Supported 00:30:42.835 Set Features Save Field: Supported 00:30:42.835 Reservations: Not Supported 00:30:42.835 Timestamp: Supported 00:30:42.835 Copy: Supported 00:30:42.835 Volatile Write Cache: Present 00:30:42.835 Atomic Write Unit (Normal): 1 00:30:42.835 Atomic Write Unit (PFail): 1 00:30:42.835 Atomic Compare & Write Unit: 1 00:30:42.835 Fused Compare & Write: Not Supported 00:30:42.835 Scatter-Gather List 00:30:42.835 SGL Command Set: Supported 00:30:42.835 SGL Keyed: Not Supported 00:30:42.835 SGL Bit Bucket Descriptor: Not Supported 00:30:42.835 SGL Metadata Pointer: Not Supported 00:30:42.835 Oversized SGL: Not Supported 00:30:42.835 SGL Metadata Address: Not Supported 00:30:42.835 SGL Offset: Not Supported 00:30:42.835 Transport SGL Data Block: Not Supported 00:30:42.835 Replay Protected Memory Block: Not Supported 00:30:42.835 00:30:42.835 Firmware Slot Information 00:30:42.835 ========================= 00:30:42.835 Active slot: 1 00:30:42.835 Slot 1 
Firmware Revision: 1.0 00:30:42.835 00:30:42.835 00:30:42.835 Commands Supported and Effects 00:30:42.835 ============================== 00:30:42.835 Admin Commands 00:30:42.835 -------------- 00:30:42.835 Delete I/O Submission Queue (00h): Supported 00:30:42.835 Create I/O Submission Queue (01h): Supported 00:30:42.835 Get Log Page (02h): Supported 00:30:42.835 Delete I/O Completion Queue (04h): Supported 00:30:42.835 Create I/O Completion Queue (05h): Supported 00:30:42.835 Identify (06h): Supported 00:30:42.835 Abort (08h): Supported 00:30:42.835 Set Features (09h): Supported 00:30:42.835 Get Features (0Ah): Supported 00:30:42.835 Asynchronous Event Request (0Ch): Supported 00:30:42.835 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:42.835 Directive Send (19h): Supported 00:30:42.835 Directive Receive (1Ah): Supported 00:30:42.835 Virtualization Management (1Ch): Supported 00:30:42.835 Doorbell Buffer Config (7Ch): Supported 00:30:42.835 Format NVM (80h): Supported LBA-Change 00:30:42.835 I/O Commands 00:30:42.835 ------------ 00:30:42.835 Flush (00h): Supported LBA-Change 00:30:42.835 Write (01h): Supported LBA-Change 00:30:42.835 Read (02h): Supported 00:30:42.835 Compare (05h): Supported 00:30:42.835 Write Zeroes (08h): Supported LBA-Change 00:30:42.835 Dataset Management (09h): Supported LBA-Change 00:30:42.835 Unknown (0Ch): Supported 00:30:42.835 Unknown (12h): Supported 00:30:42.835 Copy (19h): Supported LBA-Change 00:30:42.835 Unknown (1Dh): Supported LBA-Change 00:30:42.835 00:30:42.835 Error Log 00:30:42.835 ========= 00:30:42.835 00:30:42.835 Arbitration 00:30:42.835 =========== 00:30:42.835 Arbitration Burst: no limit 00:30:42.835 00:30:42.835 Power Management 00:30:42.835 ================ 00:30:42.835 Number of Power States: 1 00:30:42.835 Current Power State: Power State #0 00:30:42.835 Power State #0: 00:30:42.835 Max Power: 25.00 W 00:30:42.835 Non-Operational State: Operational 00:30:42.835 Entry Latency: 16 microseconds 00:30:42.835 Exit Latency: 4 microseconds 00:30:42.835 Relative Read Throughput: 0 00:30:42.835 Relative Read Latency: 0 00:30:42.835 Relative Write Throughput: 0 00:30:42.835 Relative Write Latency: 0 00:30:42.835 Idle Power: Not Reported 00:30:42.835 Active Power: Not Reported 00:30:42.835 Non-Operational Permissive Mode: Not Supported 00:30:42.835 00:30:42.835 Health Information 00:30:42.835 ================== 00:30:42.835 Critical Warnings: 00:30:42.835 Available Spare Space: OK 00:30:42.835 Temperature: OK 00:30:42.835 Device Reliability: OK 00:30:42.835 Read Only: No 00:30:42.835 Volatile Memory Backup: OK 00:30:42.835 Current Temperature: 323 Kelvin (50 Celsius) 00:30:42.835 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:42.835 Available Spare: 0% 00:30:42.835 Available Spare Threshold: 0% 00:30:42.835 Life Percentage Used: 0% 00:30:42.835 Data Units Read: 685 00:30:42.835 Data Units Written: 613 00:30:42.835 Host Read Commands: 31413 00:30:42.835 Host Write Commands: 31199 00:30:42.835 Controller Busy Time: 0 minutes 00:30:42.835 Power Cycles: 0 00:30:42.835 Power On Hours: 0 hours 00:30:42.835 Unsafe Shutdowns: 0 00:30:42.835 Unrecoverable Media Errors: 0 00:30:42.835 Lifetime Error Log Entries: 0 00:30:42.835 Warning Temperature Time: 0 minutes 00:30:42.835 Critical Temperature Time: 0 minutes 00:30:42.835 00:30:42.835 Number of Queues 00:30:42.835 ================ 00:30:42.835 Number of I/O Submission Queues: 64 00:30:42.835 Number of I/O Completion Queues: 64 00:30:42.835 00:30:42.835 ZNS Specific Controller Data 
00:30:42.835 ============================ 00:30:42.835 Zone Append Size Limit: 0 00:30:42.835 00:30:42.835 00:30:42.835 Active Namespaces 00:30:42.835 ================= 00:30:42.835 Namespace ID:1 00:30:42.835 Error Recovery Timeout: Unlimited 00:30:42.835 Command Set Identifier: NVM (00h) 00:30:42.835 Deallocate: Supported 00:30:42.835 Deallocated/Unwritten Error: Supported 00:30:42.835 Deallocated Read Value: All 0x00 00:30:42.835 Deallocate in Write Zeroes: Not Supported 00:30:42.835 Deallocated Guard Field: 0xFFFF 00:30:42.835 Flush: Supported 00:30:42.835 Reservation: Not Supported 00:30:42.835 Metadata Transferred as: Separate Metadata Buffer 00:30:42.835 Namespace Sharing Capabilities: Private 00:30:42.835 Size (in LBAs): 1548666 (5GiB) 00:30:42.835 Capacity (in LBAs): 1548666 (5GiB) 00:30:42.835 Utilization (in LBAs): 1548666 (5GiB) 00:30:42.835 Thin Provisioning: Not Supported 00:30:42.835 Per-NS Atomic Units: No 00:30:42.835 Maximum Single Source Range Length: 128 00:30:42.835 Maximum Copy Length: 128 00:30:42.835 Maximum Source Range Count: 128 00:30:42.835 NGUID/EUI64 Never Reused: No 00:30:42.835 Namespace Write Protected: No 00:30:42.835 Number of LBA Formats: 8 00:30:42.835 Current LBA Format: LBA Format #07 00:30:42.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.835 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:42.835 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:42.835 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:42.835 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:42.835 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:42.835 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:42.835 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:42.835 00:30:42.836 NVM Specific Namespace Data 00:30:42.836 =========================== 00:30:42.836 Logical Block Storage Tag Mask: 0 00:30:42.836 Protection Information Capabilities: 00:30:42.836 16b Guard Protection Information Storage Tag Support: No 00:30:42.836 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:42.836 Storage Tag Check Read Support: No 00:30:42.836 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:42.836 17:27:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:42.836 17:27:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:30:43.112 ===================================================== 00:30:43.112 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:43.112 ===================================================== 00:30:43.112 Controller Capabilities/Features 00:30:43.112 ================================ 00:30:43.112 Vendor ID: 1b36 00:30:43.112 
Subsystem Vendor ID: 1af4 00:30:43.112 Serial Number: 12341 00:30:43.112 Model Number: QEMU NVMe Ctrl 00:30:43.112 Firmware Version: 8.0.0 00:30:43.112 Recommended Arb Burst: 6 00:30:43.112 IEEE OUI Identifier: 00 54 52 00:30:43.112 Multi-path I/O 00:30:43.112 May have multiple subsystem ports: No 00:30:43.112 May have multiple controllers: No 00:30:43.112 Associated with SR-IOV VF: No 00:30:43.112 Max Data Transfer Size: 524288 00:30:43.112 Max Number of Namespaces: 256 00:30:43.112 Max Number of I/O Queues: 64 00:30:43.112 NVMe Specification Version (VS): 1.4 00:30:43.112 NVMe Specification Version (Identify): 1.4 00:30:43.112 Maximum Queue Entries: 2048 00:30:43.112 Contiguous Queues Required: Yes 00:30:43.112 Arbitration Mechanisms Supported 00:30:43.112 Weighted Round Robin: Not Supported 00:30:43.112 Vendor Specific: Not Supported 00:30:43.112 Reset Timeout: 7500 ms 00:30:43.112 Doorbell Stride: 4 bytes 00:30:43.112 NVM Subsystem Reset: Not Supported 00:30:43.112 Command Sets Supported 00:30:43.112 NVM Command Set: Supported 00:30:43.112 Boot Partition: Not Supported 00:30:43.112 Memory Page Size Minimum: 4096 bytes 00:30:43.112 Memory Page Size Maximum: 65536 bytes 00:30:43.112 Persistent Memory Region: Not Supported 00:30:43.112 Optional Asynchronous Events Supported 00:30:43.112 Namespace Attribute Notices: Supported 00:30:43.112 Firmware Activation Notices: Not Supported 00:30:43.112 ANA Change Notices: Not Supported 00:30:43.112 PLE Aggregate Log Change Notices: Not Supported 00:30:43.112 LBA Status Info Alert Notices: Not Supported 00:30:43.112 EGE Aggregate Log Change Notices: Not Supported 00:30:43.112 Normal NVM Subsystem Shutdown event: Not Supported 00:30:43.112 Zone Descriptor Change Notices: Not Supported 00:30:43.112 Discovery Log Change Notices: Not Supported 00:30:43.112 Controller Attributes 00:30:43.112 128-bit Host Identifier: Not Supported 00:30:43.112 Non-Operational Permissive Mode: Not Supported 00:30:43.112 NVM Sets: Not Supported 00:30:43.112 Read Recovery Levels: Not Supported 00:30:43.112 Endurance Groups: Not Supported 00:30:43.112 Predictable Latency Mode: Not Supported 00:30:43.112 Traffic Based Keep ALive: Not Supported 00:30:43.112 Namespace Granularity: Not Supported 00:30:43.112 SQ Associations: Not Supported 00:30:43.112 UUID List: Not Supported 00:30:43.112 Multi-Domain Subsystem: Not Supported 00:30:43.112 Fixed Capacity Management: Not Supported 00:30:43.112 Variable Capacity Management: Not Supported 00:30:43.112 Delete Endurance Group: Not Supported 00:30:43.112 Delete NVM Set: Not Supported 00:30:43.112 Extended LBA Formats Supported: Supported 00:30:43.112 Flexible Data Placement Supported: Not Supported 00:30:43.112 00:30:43.112 Controller Memory Buffer Support 00:30:43.112 ================================ 00:30:43.112 Supported: No 00:30:43.112 00:30:43.112 Persistent Memory Region Support 00:30:43.112 ================================ 00:30:43.112 Supported: No 00:30:43.112 00:30:43.112 Admin Command Set Attributes 00:30:43.112 ============================ 00:30:43.113 Security Send/Receive: Not Supported 00:30:43.113 Format NVM: Supported 00:30:43.113 Firmware Activate/Download: Not Supported 00:30:43.113 Namespace Management: Supported 00:30:43.113 Device Self-Test: Not Supported 00:30:43.113 Directives: Supported 00:30:43.113 NVMe-MI: Not Supported 00:30:43.113 Virtualization Management: Not Supported 00:30:43.113 Doorbell Buffer Config: Supported 00:30:43.113 Get LBA Status Capability: Not Supported 00:30:43.113 Command & Feature 
Lockdown Capability: Not Supported 00:30:43.113 Abort Command Limit: 4 00:30:43.113 Async Event Request Limit: 4 00:30:43.113 Number of Firmware Slots: N/A 00:30:43.113 Firmware Slot 1 Read-Only: N/A 00:30:43.113 Firmware Activation Without Reset: N/A 00:30:43.113 Multiple Update Detection Support: N/A 00:30:43.113 Firmware Update Granularity: No Information Provided 00:30:43.113 Per-Namespace SMART Log: Yes 00:30:43.113 Asymmetric Namespace Access Log Page: Not Supported 00:30:43.113 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:43.113 Command Effects Log Page: Supported 00:30:43.113 Get Log Page Extended Data: Supported 00:30:43.113 Telemetry Log Pages: Not Supported 00:30:43.113 Persistent Event Log Pages: Not Supported 00:30:43.113 Supported Log Pages Log Page: May Support 00:30:43.113 Commands Supported & Effects Log Page: Not Supported 00:30:43.113 Feature Identifiers & Effects Log Page:May Support 00:30:43.113 NVMe-MI Commands & Effects Log Page: May Support 00:30:43.113 Data Area 4 for Telemetry Log: Not Supported 00:30:43.113 Error Log Page Entries Supported: 1 00:30:43.113 Keep Alive: Not Supported 00:30:43.113 00:30:43.113 NVM Command Set Attributes 00:30:43.113 ========================== 00:30:43.113 Submission Queue Entry Size 00:30:43.113 Max: 64 00:30:43.113 Min: 64 00:30:43.113 Completion Queue Entry Size 00:30:43.113 Max: 16 00:30:43.113 Min: 16 00:30:43.113 Number of Namespaces: 256 00:30:43.113 Compare Command: Supported 00:30:43.113 Write Uncorrectable Command: Not Supported 00:30:43.113 Dataset Management Command: Supported 00:30:43.113 Write Zeroes Command: Supported 00:30:43.113 Set Features Save Field: Supported 00:30:43.113 Reservations: Not Supported 00:30:43.113 Timestamp: Supported 00:30:43.113 Copy: Supported 00:30:43.113 Volatile Write Cache: Present 00:30:43.113 Atomic Write Unit (Normal): 1 00:30:43.113 Atomic Write Unit (PFail): 1 00:30:43.113 Atomic Compare & Write Unit: 1 00:30:43.113 Fused Compare & Write: Not Supported 00:30:43.113 Scatter-Gather List 00:30:43.113 SGL Command Set: Supported 00:30:43.113 SGL Keyed: Not Supported 00:30:43.113 SGL Bit Bucket Descriptor: Not Supported 00:30:43.113 SGL Metadata Pointer: Not Supported 00:30:43.113 Oversized SGL: Not Supported 00:30:43.113 SGL Metadata Address: Not Supported 00:30:43.113 SGL Offset: Not Supported 00:30:43.113 Transport SGL Data Block: Not Supported 00:30:43.113 Replay Protected Memory Block: Not Supported 00:30:43.113 00:30:43.113 Firmware Slot Information 00:30:43.113 ========================= 00:30:43.113 Active slot: 1 00:30:43.113 Slot 1 Firmware Revision: 1.0 00:30:43.113 00:30:43.113 00:30:43.113 Commands Supported and Effects 00:30:43.113 ============================== 00:30:43.113 Admin Commands 00:30:43.113 -------------- 00:30:43.113 Delete I/O Submission Queue (00h): Supported 00:30:43.113 Create I/O Submission Queue (01h): Supported 00:30:43.113 Get Log Page (02h): Supported 00:30:43.113 Delete I/O Completion Queue (04h): Supported 00:30:43.113 Create I/O Completion Queue (05h): Supported 00:30:43.113 Identify (06h): Supported 00:30:43.113 Abort (08h): Supported 00:30:43.113 Set Features (09h): Supported 00:30:43.113 Get Features (0Ah): Supported 00:30:43.113 Asynchronous Event Request (0Ch): Supported 00:30:43.113 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:43.113 Directive Send (19h): Supported 00:30:43.113 Directive Receive (1Ah): Supported 00:30:43.113 Virtualization Management (1Ch): Supported 00:30:43.113 Doorbell Buffer Config (7Ch): Supported 
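
The "Commands Supported and Effects" listing that continues below (Format NVM follows next) is rendered from the Command Effects log page, log page 05h, which these controllers advertise as "Command Effects Log Page: Supported". A hedged sketch of fetching it with SPDK and testing one opcode, meant to plug into the attach flow from the earlier sketch; the spdk_nvme_cmds_and_effect_log_page struct and its csupp/lbcc bit names are my recollection of spdk/nvme_spec.h, so verify them against your tree.

    /* Fetch the Command Effects log (05h) and report one admin opcode.
     * Completion-status checking is elided for brevity. */
    static void
    log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cpl;
            *(bool *)cb_arg = true;
    }

    static void
    print_format_nvm_effects(struct spdk_nvme_ctrlr *ctrlr)
    {
            static struct spdk_nvme_cmds_and_effect_log_page effects;
            bool done = false;

            if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                            SPDK_NVME_LOG_COMMAND_EFFECTS_LOG,
                            SPDK_NVME_GLOBAL_NS_TAG, &effects,
                            sizeof(effects), 0, log_page_done, &done) != 0) {
                    return;
            }
            while (!done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            /* CSUPP = command supported; LBCC = may change LBA content,
             * which is what the "LBA-Change" suffix in the log denotes. */
            printf("Format NVM (80h): %s%s\n",
                   effects.admin_cmds_supported[0x80].csupp ? "Supported"
                                                            : "Not Supported",
                   effects.admin_cmds_supported[0x80].lbcc ? " LBA-Change" : "");
    }
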
00:30:43.113 Format NVM (80h): Supported LBA-Change 00:30:43.113 I/O Commands 00:30:43.113 ------------ 00:30:43.113 Flush (00h): Supported LBA-Change 00:30:43.113 Write (01h): Supported LBA-Change 00:30:43.113 Read (02h): Supported 00:30:43.113 Compare (05h): Supported 00:30:43.113 Write Zeroes (08h): Supported LBA-Change 00:30:43.113 Dataset Management (09h): Supported LBA-Change 00:30:43.113 Unknown (0Ch): Supported 00:30:43.113 Unknown (12h): Supported 00:30:43.113 Copy (19h): Supported LBA-Change 00:30:43.113 Unknown (1Dh): Supported LBA-Change 00:30:43.113 00:30:43.113 Error Log 00:30:43.113 ========= 00:30:43.113 00:30:43.113 Arbitration 00:30:43.113 =========== 00:30:43.113 Arbitration Burst: no limit 00:30:43.113 00:30:43.113 Power Management 00:30:43.113 ================ 00:30:43.113 Number of Power States: 1 00:30:43.113 Current Power State: Power State #0 00:30:43.113 Power State #0: 00:30:43.113 Max Power: 25.00 W 00:30:43.113 Non-Operational State: Operational 00:30:43.113 Entry Latency: 16 microseconds 00:30:43.113 Exit Latency: 4 microseconds 00:30:43.113 Relative Read Throughput: 0 00:30:43.113 Relative Read Latency: 0 00:30:43.113 Relative Write Throughput: 0 00:30:43.113 Relative Write Latency: 0 00:30:43.113 Idle Power: Not Reported 00:30:43.113 Active Power: Not Reported 00:30:43.113 Non-Operational Permissive Mode: Not Supported 00:30:43.113 00:30:43.113 Health Information 00:30:43.113 ================== 00:30:43.113 Critical Warnings: 00:30:43.113 Available Spare Space: OK 00:30:43.113 Temperature: OK 00:30:43.113 Device Reliability: OK 00:30:43.113 Read Only: No 00:30:43.113 Volatile Memory Backup: OK 00:30:43.113 Current Temperature: 323 Kelvin (50 Celsius) 00:30:43.113 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:43.113 Available Spare: 0% 00:30:43.113 Available Spare Threshold: 0% 00:30:43.113 Life Percentage Used: 0% 00:30:43.113 Data Units Read: 1032 00:30:43.113 Data Units Written: 893 00:30:43.113 Host Read Commands: 45862 00:30:43.113 Host Write Commands: 44549 00:30:43.113 Controller Busy Time: 0 minutes 00:30:43.113 Power Cycles: 0 00:30:43.113 Power On Hours: 0 hours 00:30:43.113 Unsafe Shutdowns: 0 00:30:43.113 Unrecoverable Media Errors: 0 00:30:43.113 Lifetime Error Log Entries: 0 00:30:43.113 Warning Temperature Time: 0 minutes 00:30:43.113 Critical Temperature Time: 0 minutes 00:30:43.113 00:30:43.113 Number of Queues 00:30:43.113 ================ 00:30:43.113 Number of I/O Submission Queues: 64 00:30:43.113 Number of I/O Completion Queues: 64 00:30:43.113 00:30:43.113 ZNS Specific Controller Data 00:30:43.113 ============================ 00:30:43.113 Zone Append Size Limit: 0 00:30:43.113 00:30:43.113 00:30:43.113 Active Namespaces 00:30:43.113 ================= 00:30:43.113 Namespace ID:1 00:30:43.113 Error Recovery Timeout: Unlimited 00:30:43.113 Command Set Identifier: NVM (00h) 00:30:43.113 Deallocate: Supported 00:30:43.113 Deallocated/Unwritten Error: Supported 00:30:43.113 Deallocated Read Value: All 0x00 00:30:43.113 Deallocate in Write Zeroes: Not Supported 00:30:43.113 Deallocated Guard Field: 0xFFFF 00:30:43.113 Flush: Supported 00:30:43.113 Reservation: Not Supported 00:30:43.113 Namespace Sharing Capabilities: Private 00:30:43.113 Size (in LBAs): 1310720 (5GiB) 00:30:43.113 Capacity (in LBAs): 1310720 (5GiB) 00:30:43.113 Utilization (in LBAs): 1310720 (5GiB) 00:30:43.113 Thin Provisioning: Not Supported 00:30:43.113 Per-NS Atomic Units: No 00:30:43.113 Maximum Single Source Range Length: 128 00:30:43.113 Maximum Copy Length: 128 
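
Namespace sizes in these dumps are reported in LBAs; at the 4096-byte format in use here, the 1310720 LBAs shown for controller 12341's namespace work out to exactly 5 GiB (1310720 * 4096 = 5368709120 bytes), matching the "(5GiB)" annotation. A short sketch using SPDK's namespace accessors, again assuming a `ctrlr` attached as in the first sketch:

    #include <inttypes.h>   /* PRIu32 / PRIu64 */

    /* Walk the active namespaces and reproduce the Size (in LBAs) lines. */
    static void
    print_ns_geometry(struct spdk_nvme_ctrlr *ctrlr)
    {
            uint32_t nsid;

            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                    uint64_t lbas = spdk_nvme_ns_get_num_sectors(ns);
                    uint32_t lba_size = spdk_nvme_ns_get_sector_size(ns);

                    /* e.g. 1310720 LBAs * 4096 B = 5368709120 B = 5 GiB */
                    printf("Namespace ID:%" PRIu32 " Size (in LBAs): %" PRIu64
                           " (%" PRIu64 "GiB)\n",
                           nsid, lbas,
                           lbas * lba_size / (1024ULL * 1024 * 1024));
            }
    }

The integer division mirrors the tool's rounded-down GiB figure, which is also why the 1548666-LBA namespace elsewhere in this stage prints as "(5GiB)".
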
00:30:43.113 Maximum Source Range Count: 128 00:30:43.113 NGUID/EUI64 Never Reused: No 00:30:43.113 Namespace Write Protected: No 00:30:43.113 Number of LBA Formats: 8 00:30:43.113 Current LBA Format: LBA Format #04 00:30:43.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:43.113 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:43.113 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:43.113 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:43.113 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:43.113 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:43.113 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:43.113 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:43.113 00:30:43.113 NVM Specific Namespace Data 00:30:43.113 =========================== 00:30:43.113 Logical Block Storage Tag Mask: 0 00:30:43.113 Protection Information Capabilities: 00:30:43.113 16b Guard Protection Information Storage Tag Support: No 00:30:43.113 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:43.113 Storage Tag Check Read Support: No 00:30:43.113 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.113 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.114 17:27:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:43.114 17:27:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:30:43.374 ===================================================== 00:30:43.374 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:43.374 ===================================================== 00:30:43.374 Controller Capabilities/Features 00:30:43.374 ================================ 00:30:43.374 Vendor ID: 1b36 00:30:43.374 Subsystem Vendor ID: 1af4 00:30:43.374 Serial Number: 12342 00:30:43.374 Model Number: QEMU NVMe Ctrl 00:30:43.374 Firmware Version: 8.0.0 00:30:43.374 Recommended Arb Burst: 6 00:30:43.374 IEEE OUI Identifier: 00 54 52 00:30:43.374 Multi-path I/O 00:30:43.374 May have multiple subsystem ports: No 00:30:43.374 May have multiple controllers: No 00:30:43.374 Associated with SR-IOV VF: No 00:30:43.374 Max Data Transfer Size: 524288 00:30:43.374 Max Number of Namespaces: 256 00:30:43.374 Max Number of I/O Queues: 64 00:30:43.374 NVMe Specification Version (VS): 1.4 00:30:43.374 NVMe Specification Version (Identify): 1.4 00:30:43.374 Maximum Queue Entries: 2048 00:30:43.374 Contiguous Queues Required: Yes 00:30:43.374 Arbitration Mechanisms Supported 00:30:43.374 Weighted Round Robin: Not Supported 00:30:43.374 Vendor Specific: Not Supported 00:30:43.374 Reset Timeout: 7500 ms 00:30:43.374 Doorbell Stride: 4 bytes 00:30:43.374 NVM Subsystem Reset: Not Supported 00:30:43.374 Command Sets Supported 00:30:43.374 NVM Command 
Set: Supported 00:30:43.374 Boot Partition: Not Supported 00:30:43.374 Memory Page Size Minimum: 4096 bytes 00:30:43.374 Memory Page Size Maximum: 65536 bytes 00:30:43.374 Persistent Memory Region: Not Supported 00:30:43.374 Optional Asynchronous Events Supported 00:30:43.374 Namespace Attribute Notices: Supported 00:30:43.374 Firmware Activation Notices: Not Supported 00:30:43.374 ANA Change Notices: Not Supported 00:30:43.374 PLE Aggregate Log Change Notices: Not Supported 00:30:43.374 LBA Status Info Alert Notices: Not Supported 00:30:43.374 EGE Aggregate Log Change Notices: Not Supported 00:30:43.374 Normal NVM Subsystem Shutdown event: Not Supported 00:30:43.374 Zone Descriptor Change Notices: Not Supported 00:30:43.374 Discovery Log Change Notices: Not Supported 00:30:43.374 Controller Attributes 00:30:43.374 128-bit Host Identifier: Not Supported 00:30:43.374 Non-Operational Permissive Mode: Not Supported 00:30:43.374 NVM Sets: Not Supported 00:30:43.374 Read Recovery Levels: Not Supported 00:30:43.374 Endurance Groups: Not Supported 00:30:43.374 Predictable Latency Mode: Not Supported 00:30:43.374 Traffic Based Keep ALive: Not Supported 00:30:43.374 Namespace Granularity: Not Supported 00:30:43.374 SQ Associations: Not Supported 00:30:43.374 UUID List: Not Supported 00:30:43.374 Multi-Domain Subsystem: Not Supported 00:30:43.374 Fixed Capacity Management: Not Supported 00:30:43.374 Variable Capacity Management: Not Supported 00:30:43.374 Delete Endurance Group: Not Supported 00:30:43.374 Delete NVM Set: Not Supported 00:30:43.374 Extended LBA Formats Supported: Supported 00:30:43.374 Flexible Data Placement Supported: Not Supported 00:30:43.374 00:30:43.374 Controller Memory Buffer Support 00:30:43.374 ================================ 00:30:43.374 Supported: No 00:30:43.374 00:30:43.374 Persistent Memory Region Support 00:30:43.374 ================================ 00:30:43.374 Supported: No 00:30:43.374 00:30:43.374 Admin Command Set Attributes 00:30:43.374 ============================ 00:30:43.374 Security Send/Receive: Not Supported 00:30:43.374 Format NVM: Supported 00:30:43.374 Firmware Activate/Download: Not Supported 00:30:43.374 Namespace Management: Supported 00:30:43.374 Device Self-Test: Not Supported 00:30:43.374 Directives: Supported 00:30:43.374 NVMe-MI: Not Supported 00:30:43.374 Virtualization Management: Not Supported 00:30:43.374 Doorbell Buffer Config: Supported 00:30:43.374 Get LBA Status Capability: Not Supported 00:30:43.374 Command & Feature Lockdown Capability: Not Supported 00:30:43.374 Abort Command Limit: 4 00:30:43.375 Async Event Request Limit: 4 00:30:43.375 Number of Firmware Slots: N/A 00:30:43.375 Firmware Slot 1 Read-Only: N/A 00:30:43.375 Firmware Activation Without Reset: N/A 00:30:43.375 Multiple Update Detection Support: N/A 00:30:43.375 Firmware Update Granularity: No Information Provided 00:30:43.375 Per-Namespace SMART Log: Yes 00:30:43.375 Asymmetric Namespace Access Log Page: Not Supported 00:30:43.375 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:43.375 Command Effects Log Page: Supported 00:30:43.375 Get Log Page Extended Data: Supported 00:30:43.375 Telemetry Log Pages: Not Supported 00:30:43.375 Persistent Event Log Pages: Not Supported 00:30:43.375 Supported Log Pages Log Page: May Support 00:30:43.375 Commands Supported & Effects Log Page: Not Supported 00:30:43.375 Feature Identifiers & Effects Log Page:May Support 00:30:43.375 NVMe-MI Commands & Effects Log Page: May Support 00:30:43.375 Data Area 4 for Telemetry Log: Not 
Supported 00:30:43.375 Error Log Page Entries Supported: 1 00:30:43.375 Keep Alive: Not Supported 00:30:43.375 00:30:43.375 NVM Command Set Attributes 00:30:43.375 ========================== 00:30:43.375 Submission Queue Entry Size 00:30:43.375 Max: 64 00:30:43.375 Min: 64 00:30:43.375 Completion Queue Entry Size 00:30:43.375 Max: 16 00:30:43.375 Min: 16 00:30:43.375 Number of Namespaces: 256 00:30:43.375 Compare Command: Supported 00:30:43.375 Write Uncorrectable Command: Not Supported 00:30:43.375 Dataset Management Command: Supported 00:30:43.375 Write Zeroes Command: Supported 00:30:43.375 Set Features Save Field: Supported 00:30:43.375 Reservations: Not Supported 00:30:43.375 Timestamp: Supported 00:30:43.375 Copy: Supported 00:30:43.375 Volatile Write Cache: Present 00:30:43.375 Atomic Write Unit (Normal): 1 00:30:43.375 Atomic Write Unit (PFail): 1 00:30:43.375 Atomic Compare & Write Unit: 1 00:30:43.375 Fused Compare & Write: Not Supported 00:30:43.375 Scatter-Gather List 00:30:43.375 SGL Command Set: Supported 00:30:43.375 SGL Keyed: Not Supported 00:30:43.375 SGL Bit Bucket Descriptor: Not Supported 00:30:43.375 SGL Metadata Pointer: Not Supported 00:30:43.375 Oversized SGL: Not Supported 00:30:43.375 SGL Metadata Address: Not Supported 00:30:43.375 SGL Offset: Not Supported 00:30:43.375 Transport SGL Data Block: Not Supported 00:30:43.375 Replay Protected Memory Block: Not Supported 00:30:43.375 00:30:43.375 Firmware Slot Information 00:30:43.375 ========================= 00:30:43.375 Active slot: 1 00:30:43.375 Slot 1 Firmware Revision: 1.0 00:30:43.375 00:30:43.375 00:30:43.375 Commands Supported and Effects 00:30:43.375 ============================== 00:30:43.375 Admin Commands 00:30:43.375 -------------- 00:30:43.375 Delete I/O Submission Queue (00h): Supported 00:30:43.375 Create I/O Submission Queue (01h): Supported 00:30:43.375 Get Log Page (02h): Supported 00:30:43.375 Delete I/O Completion Queue (04h): Supported 00:30:43.375 Create I/O Completion Queue (05h): Supported 00:30:43.375 Identify (06h): Supported 00:30:43.375 Abort (08h): Supported 00:30:43.375 Set Features (09h): Supported 00:30:43.375 Get Features (0Ah): Supported 00:30:43.375 Asynchronous Event Request (0Ch): Supported 00:30:43.375 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:43.375 Directive Send (19h): Supported 00:30:43.375 Directive Receive (1Ah): Supported 00:30:43.375 Virtualization Management (1Ch): Supported 00:30:43.375 Doorbell Buffer Config (7Ch): Supported 00:30:43.375 Format NVM (80h): Supported LBA-Change 00:30:43.375 I/O Commands 00:30:43.375 ------------ 00:30:43.375 Flush (00h): Supported LBA-Change 00:30:43.375 Write (01h): Supported LBA-Change 00:30:43.375 Read (02h): Supported 00:30:43.375 Compare (05h): Supported 00:30:43.375 Write Zeroes (08h): Supported LBA-Change 00:30:43.375 Dataset Management (09h): Supported LBA-Change 00:30:43.375 Unknown (0Ch): Supported 00:30:43.375 Unknown (12h): Supported 00:30:43.375 Copy (19h): Supported LBA-Change 00:30:43.375 Unknown (1Dh): Supported LBA-Change 00:30:43.375 00:30:43.375 Error Log 00:30:43.375 ========= 00:30:43.375 00:30:43.375 Arbitration 00:30:43.375 =========== 00:30:43.375 Arbitration Burst: no limit 00:30:43.375 00:30:43.375 Power Management 00:30:43.375 ================ 00:30:43.375 Number of Power States: 1 00:30:43.375 Current Power State: Power State #0 00:30:43.375 Power State #0: 00:30:43.375 Max Power: 25.00 W 00:30:43.375 Non-Operational State: Operational 00:30:43.375 Entry Latency: 16 microseconds 
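
The Power State #0 block being printed here decodes from the power state descriptor array at the end of Identify Controller: MP is in units of 0.01 W while MPS is 0 (so a raw 2500 renders as "25.00 W"), and ENLAT/EXLAT are microseconds (16 and 4 for these QEMU controllers). A sketch under the assumption that SPDK's spdk_nvme_ctrlr_data exposes the descriptors as psd[] with mp/enlat/exlat members, which should be checked against nvme_spec.h:

    /* Decode the first power state descriptor the way the identify tool
     * prints it. Field names here are assumptions, not verified API. */
    static void
    print_power_state0(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            const struct spdk_nvme_power_state *ps = &cdata->psd[0];

            /* MP is in 0.01 W units while MPS == 0: 2500 -> "25.00 W". */
            printf("Max Power: %u.%02u W\n", ps->mp / 100, ps->mp % 100);
            printf("Entry Latency: %u microseconds\n", ps->enlat);
            printf("Exit Latency: %u microseconds\n", ps->exlat);
    }
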
00:30:43.375 Exit Latency: 4 microseconds 00:30:43.375 Relative Read Throughput: 0 00:30:43.375 Relative Read Latency: 0 00:30:43.375 Relative Write Throughput: 0 00:30:43.375 Relative Write Latency: 0 00:30:43.375 Idle Power: Not Reported 00:30:43.375 Active Power: Not Reported 00:30:43.375 Non-Operational Permissive Mode: Not Supported 00:30:43.375 00:30:43.375 Health Information 00:30:43.375 ================== 00:30:43.375 Critical Warnings: 00:30:43.375 Available Spare Space: OK 00:30:43.375 Temperature: OK 00:30:43.375 Device Reliability: OK 00:30:43.375 Read Only: No 00:30:43.375 Volatile Memory Backup: OK 00:30:43.375 Current Temperature: 323 Kelvin (50 Celsius) 00:30:43.375 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:43.375 Available Spare: 0% 00:30:43.375 Available Spare Threshold: 0% 00:30:43.375 Life Percentage Used: 0% 00:30:43.375 Data Units Read: 2222 00:30:43.375 Data Units Written: 2009 00:30:43.375 Host Read Commands: 96016 00:30:43.375 Host Write Commands: 94285 00:30:43.375 Controller Busy Time: 0 minutes 00:30:43.375 Power Cycles: 0 00:30:43.375 Power On Hours: 0 hours 00:30:43.375 Unsafe Shutdowns: 0 00:30:43.375 Unrecoverable Media Errors: 0 00:30:43.375 Lifetime Error Log Entries: 0 00:30:43.375 Warning Temperature Time: 0 minutes 00:30:43.375 Critical Temperature Time: 0 minutes 00:30:43.375 00:30:43.375 Number of Queues 00:30:43.375 ================ 00:30:43.375 Number of I/O Submission Queues: 64 00:30:43.375 Number of I/O Completion Queues: 64 00:30:43.375 00:30:43.375 ZNS Specific Controller Data 00:30:43.375 ============================ 00:30:43.375 Zone Append Size Limit: 0 00:30:43.375 00:30:43.375 00:30:43.375 Active Namespaces 00:30:43.375 ================= 00:30:43.375 Namespace ID:1 00:30:43.375 Error Recovery Timeout: Unlimited 00:30:43.375 Command Set Identifier: NVM (00h) 00:30:43.375 Deallocate: Supported 00:30:43.375 Deallocated/Unwritten Error: Supported 00:30:43.375 Deallocated Read Value: All 0x00 00:30:43.375 Deallocate in Write Zeroes: Not Supported 00:30:43.375 Deallocated Guard Field: 0xFFFF 00:30:43.375 Flush: Supported 00:30:43.375 Reservation: Not Supported 00:30:43.375 Namespace Sharing Capabilities: Private 00:30:43.375 Size (in LBAs): 1048576 (4GiB) 00:30:43.375 Capacity (in LBAs): 1048576 (4GiB) 00:30:43.375 Utilization (in LBAs): 1048576 (4GiB) 00:30:43.375 Thin Provisioning: Not Supported 00:30:43.375 Per-NS Atomic Units: No 00:30:43.375 Maximum Single Source Range Length: 128 00:30:43.375 Maximum Copy Length: 128 00:30:43.375 Maximum Source Range Count: 128 00:30:43.375 NGUID/EUI64 Never Reused: No 00:30:43.375 Namespace Write Protected: No 00:30:43.375 Number of LBA Formats: 8 00:30:43.375 Current LBA Format: LBA Format #04 00:30:43.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:43.375 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:43.375 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:43.375 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:43.375 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:43.375 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:43.375 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:43.375 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:43.375 00:30:43.375 NVM Specific Namespace Data 00:30:43.375 =========================== 00:30:43.375 Logical Block Storage Tag Mask: 0 00:30:43.375 Protection Information Capabilities: 00:30:43.375 16b Guard Protection Information Storage Tag Support: No 00:30:43.375 16b Guard Protection Information Storage Tag 
Mask: Any bit in LBSTM can be 0 00:30:43.375 Storage Tag Check Read Support: No 00:30:43.375 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.375 Namespace ID:2 00:30:43.375 Error Recovery Timeout: Unlimited 00:30:43.376 Command Set Identifier: NVM (00h) 00:30:43.376 Deallocate: Supported 00:30:43.376 Deallocated/Unwritten Error: Supported 00:30:43.376 Deallocated Read Value: All 0x00 00:30:43.376 Deallocate in Write Zeroes: Not Supported 00:30:43.376 Deallocated Guard Field: 0xFFFF 00:30:43.376 Flush: Supported 00:30:43.376 Reservation: Not Supported 00:30:43.376 Namespace Sharing Capabilities: Private 00:30:43.376 Size (in LBAs): 1048576 (4GiB) 00:30:43.376 Capacity (in LBAs): 1048576 (4GiB) 00:30:43.376 Utilization (in LBAs): 1048576 (4GiB) 00:30:43.376 Thin Provisioning: Not Supported 00:30:43.376 Per-NS Atomic Units: No 00:30:43.376 Maximum Single Source Range Length: 128 00:30:43.376 Maximum Copy Length: 128 00:30:43.376 Maximum Source Range Count: 128 00:30:43.376 NGUID/EUI64 Never Reused: No 00:30:43.376 Namespace Write Protected: No 00:30:43.376 Number of LBA Formats: 8 00:30:43.376 Current LBA Format: LBA Format #04 00:30:43.376 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:43.376 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:43.376 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:43.376 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:43.376 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:43.376 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:43.376 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:43.376 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:43.376 00:30:43.376 NVM Specific Namespace Data 00:30:43.376 =========================== 00:30:43.376 Logical Block Storage Tag Mask: 0 00:30:43.376 Protection Information Capabilities: 00:30:43.376 16b Guard Protection Information Storage Tag Support: No 00:30:43.376 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:43.376 Storage Tag Check Read Support: No 00:30:43.376 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
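
Every namespace in these dumps advertises the same eight LBA formats: 512-byte or 4096-byte data blocks crossed with 0, 8, 16, or 64 metadata bytes. In Identify Namespace terms, LBADS is a log2 of the data size and MS a raw metadata byte count, with NLBAF zero-based and FLBAS selecting the format in use (#04 here, i.e. 4096 + 0). A sketch reproducing the table, assuming the lbaf/flbas field names match your spdk/nvme_spec.h:

    /* Rebuild the LBA format table from Identify Namespace data. */
    static void
    print_lba_formats(struct spdk_nvme_ns *ns)
    {
            const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);
            uint32_t i;

            /* NLBAF is 0-based, hence the +1; FLBAS picks the active format. */
            printf("Number of LBA Formats: %u\n", nsdata->nlbaf + 1);
            printf("Current LBA Format: LBA Format #%02u\n", nsdata->flbas.format);
            for (i = 0; i <= nsdata->nlbaf; i++) {
                    /* LBADS is log2 of data size (9 -> 512, 12 -> 4096);
                     * MS is metadata bytes (0, 8, 16, or 64 here). */
                    printf("LBA Format #%02u: Data Size: %u Metadata Size: %u\n",
                           i, 1u << nsdata->lbaf[i].lbads, nsdata->lbaf[i].ms);
            }
    }
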
00:30:43.376 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Namespace ID:3 00:30:43.376 Error Recovery Timeout: Unlimited 00:30:43.376 Command Set Identifier: NVM (00h) 00:30:43.376 Deallocate: Supported 00:30:43.376 Deallocated/Unwritten Error: Supported 00:30:43.376 Deallocated Read Value: All 0x00 00:30:43.376 Deallocate in Write Zeroes: Not Supported 00:30:43.376 Deallocated Guard Field: 0xFFFF 00:30:43.376 Flush: Supported 00:30:43.376 Reservation: Not Supported 00:30:43.376 Namespace Sharing Capabilities: Private 00:30:43.376 Size (in LBAs): 1048576 (4GiB) 00:30:43.376 Capacity (in LBAs): 1048576 (4GiB) 00:30:43.376 Utilization (in LBAs): 1048576 (4GiB) 00:30:43.376 Thin Provisioning: Not Supported 00:30:43.376 Per-NS Atomic Units: No 00:30:43.376 Maximum Single Source Range Length: 128 00:30:43.376 Maximum Copy Length: 128 00:30:43.376 Maximum Source Range Count: 128 00:30:43.376 NGUID/EUI64 Never Reused: No 00:30:43.376 Namespace Write Protected: No 00:30:43.376 Number of LBA Formats: 8 00:30:43.376 Current LBA Format: LBA Format #04 00:30:43.376 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:43.376 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:43.376 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:43.376 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:43.376 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:43.376 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:43.376 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:43.376 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:43.376 00:30:43.376 NVM Specific Namespace Data 00:30:43.376 =========================== 00:30:43.376 Logical Block Storage Tag Mask: 0 00:30:43.376 Protection Information Capabilities: 00:30:43.376 16b Guard Protection Information Storage Tag Support: No 00:30:43.376 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:43.376 Storage Tag Check Read Support: No 00:30:43.376 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.376 17:27:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:43.376 17:27:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:30:43.945 ===================================================== 00:30:43.945 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:43.945 ===================================================== 00:30:43.945 Controller Capabilities/Features 00:30:43.945 ================================ 00:30:43.945 Vendor ID: 1b36 00:30:43.945 Subsystem Vendor ID: 1af4 00:30:43.945 Serial Number: 12343 00:30:43.945 Model Number: QEMU NVMe Ctrl 00:30:43.945 Firmware Version: 
8.0.0 00:30:43.945 Recommended Arb Burst: 6 00:30:43.945 IEEE OUI Identifier: 00 54 52 00:30:43.945 Multi-path I/O 00:30:43.945 May have multiple subsystem ports: No 00:30:43.945 May have multiple controllers: Yes 00:30:43.945 Associated with SR-IOV VF: No 00:30:43.945 Max Data Transfer Size: 524288 00:30:43.945 Max Number of Namespaces: 256 00:30:43.945 Max Number of I/O Queues: 64 00:30:43.945 NVMe Specification Version (VS): 1.4 00:30:43.945 NVMe Specification Version (Identify): 1.4 00:30:43.945 Maximum Queue Entries: 2048 00:30:43.945 Contiguous Queues Required: Yes 00:30:43.945 Arbitration Mechanisms Supported 00:30:43.945 Weighted Round Robin: Not Supported 00:30:43.945 Vendor Specific: Not Supported 00:30:43.945 Reset Timeout: 7500 ms 00:30:43.945 Doorbell Stride: 4 bytes 00:30:43.945 NVM Subsystem Reset: Not Supported 00:30:43.945 Command Sets Supported 00:30:43.945 NVM Command Set: Supported 00:30:43.945 Boot Partition: Not Supported 00:30:43.945 Memory Page Size Minimum: 4096 bytes 00:30:43.945 Memory Page Size Maximum: 65536 bytes 00:30:43.945 Persistent Memory Region: Not Supported 00:30:43.945 Optional Asynchronous Events Supported 00:30:43.945 Namespace Attribute Notices: Supported 00:30:43.945 Firmware Activation Notices: Not Supported 00:30:43.945 ANA Change Notices: Not Supported 00:30:43.945 PLE Aggregate Log Change Notices: Not Supported 00:30:43.945 LBA Status Info Alert Notices: Not Supported 00:30:43.945 EGE Aggregate Log Change Notices: Not Supported 00:30:43.945 Normal NVM Subsystem Shutdown event: Not Supported 00:30:43.945 Zone Descriptor Change Notices: Not Supported 00:30:43.945 Discovery Log Change Notices: Not Supported 00:30:43.945 Controller Attributes 00:30:43.945 128-bit Host Identifier: Not Supported 00:30:43.945 Non-Operational Permissive Mode: Not Supported 00:30:43.945 NVM Sets: Not Supported 00:30:43.945 Read Recovery Levels: Not Supported 00:30:43.945 Endurance Groups: Supported 00:30:43.945 Predictable Latency Mode: Not Supported 00:30:43.945 Traffic Based Keep Alive: Not Supported 00:30:43.945 Namespace Granularity: Not Supported 00:30:43.945 SQ Associations: Not Supported 00:30:43.945 UUID List: Not Supported 00:30:43.945 Multi-Domain Subsystem: Not Supported 00:30:43.945 Fixed Capacity Management: Not Supported 00:30:43.945 Variable Capacity Management: Not Supported 00:30:43.945 Delete Endurance Group: Not Supported 00:30:43.945 Delete NVM Set: Not Supported 00:30:43.945 Extended LBA Formats Supported: Supported 00:30:43.945 Flexible Data Placement Supported: Supported 00:30:43.945 00:30:43.945 Controller Memory Buffer Support 00:30:43.945 ================================ 00:30:43.945 Supported: No 00:30:43.945 00:30:43.945 Persistent Memory Region Support 00:30:43.945 ================================ 00:30:43.945 Supported: No 00:30:43.945 00:30:43.945 Admin Command Set Attributes 00:30:43.945 ============================ 00:30:43.945 Security Send/Receive: Not Supported 00:30:43.945 Format NVM: Supported 00:30:43.945 Firmware Activate/Download: Not Supported 00:30:43.945 Namespace Management: Supported 00:30:43.945 Device Self-Test: Not Supported 00:30:43.945 Directives: Supported 00:30:43.945 NVMe-MI: Not Supported 00:30:43.945 Virtualization Management: Not Supported 00:30:43.945 Doorbell Buffer Config: Supported 00:30:43.945 Get LBA Status Capability: Not Supported 00:30:43.945 Command & Feature Lockdown Capability: Not Supported 00:30:43.945 Abort Command Limit: 4 00:30:43.945 Async Event Request Limit: 4 00:30:43.945 Number of Firmware
Slots: N/A 00:30:43.945 Firmware Slot 1 Read-Only: N/A 00:30:43.945 Firmware Activation Without Reset: N/A 00:30:43.945 Multiple Update Detection Support: N/A 00:30:43.945 Firmware Update Granularity: No Information Provided 00:30:43.945 Per-Namespace SMART Log: Yes 00:30:43.945 Asymmetric Namespace Access Log Page: Not Supported 00:30:43.945 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:43.945 Command Effects Log Page: Supported 00:30:43.945 Get Log Page Extended Data: Supported 00:30:43.945 Telemetry Log Pages: Not Supported 00:30:43.945 Persistent Event Log Pages: Not Supported 00:30:43.945 Supported Log Pages Log Page: May Support 00:30:43.945 Commands Supported & Effects Log Page: Not Supported 00:30:43.945 Feature Identifiers & Effects Log Page: May Support 00:30:43.945 NVMe-MI Commands & Effects Log Page: May Support 00:30:43.945 Data Area 4 for Telemetry Log: Not Supported 00:30:43.945 Error Log Page Entries Supported: 1 00:30:43.945 Keep Alive: Not Supported 00:30:43.945 00:30:43.945 NVM Command Set Attributes 00:30:43.945 ========================== 00:30:43.945 Submission Queue Entry Size 00:30:43.945 Max: 64 00:30:43.945 Min: 64 00:30:43.945 Completion Queue Entry Size 00:30:43.945 Max: 16 00:30:43.945 Min: 16 00:30:43.945 Number of Namespaces: 256 00:30:43.945 Compare Command: Supported 00:30:43.945 Write Uncorrectable Command: Not Supported 00:30:43.945 Dataset Management Command: Supported 00:30:43.945 Write Zeroes Command: Supported 00:30:43.945 Set Features Save Field: Supported 00:30:43.945 Reservations: Not Supported 00:30:43.945 Timestamp: Supported 00:30:43.945 Copy: Supported 00:30:43.945 Volatile Write Cache: Present 00:30:43.945 Atomic Write Unit (Normal): 1 00:30:43.945 Atomic Write Unit (PFail): 1 00:30:43.945 Atomic Compare & Write Unit: 1 00:30:43.945 Fused Compare & Write: Not Supported 00:30:43.945 Scatter-Gather List 00:30:43.946 SGL Command Set: Supported 00:30:43.946 SGL Keyed: Not Supported 00:30:43.946 SGL Bit Bucket Descriptor: Not Supported 00:30:43.946 SGL Metadata Pointer: Not Supported 00:30:43.946 Oversized SGL: Not Supported 00:30:43.946 SGL Metadata Address: Not Supported 00:30:43.946 SGL Offset: Not Supported 00:30:43.946 Transport SGL Data Block: Not Supported 00:30:43.946 Replay Protected Memory Block: Not Supported 00:30:43.946 00:30:43.946 Firmware Slot Information 00:30:43.946 ========================= 00:30:43.946 Active slot: 1 00:30:43.946 Slot 1 Firmware Revision: 1.0 00:30:43.946 00:30:43.946 00:30:43.946 Commands Supported and Effects 00:30:43.946 ============================== 00:30:43.946 Admin Commands 00:30:43.946 -------------- 00:30:43.946 Delete I/O Submission Queue (00h): Supported 00:30:43.946 Create I/O Submission Queue (01h): Supported 00:30:43.946 Get Log Page (02h): Supported 00:30:43.946 Delete I/O Completion Queue (04h): Supported 00:30:43.946 Create I/O Completion Queue (05h): Supported 00:30:43.946 Identify (06h): Supported 00:30:43.946 Abort (08h): Supported 00:30:43.946 Set Features (09h): Supported 00:30:43.946 Get Features (0Ah): Supported 00:30:43.946 Asynchronous Event Request (0Ch): Supported 00:30:43.946 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:43.946 Directive Send (19h): Supported 00:30:43.946 Directive Receive (1Ah): Supported 00:30:43.946 Virtualization Management (1Ch): Supported 00:30:43.946 Doorbell Buffer Config (7Ch): Supported 00:30:43.946 Format NVM (80h): Supported LBA-Change 00:30:43.946 I/O Commands 00:30:43.946 ------------ 00:30:43.946 Flush (00h): Supported
LBA-Change 00:30:43.946 Write (01h): Supported LBA-Change 00:30:43.946 Read (02h): Supported 00:30:43.946 Compare (05h): Supported 00:30:43.946 Write Zeroes (08h): Supported LBA-Change 00:30:43.946 Dataset Management (09h): Supported LBA-Change 00:30:43.946 Unknown (0Ch): Supported 00:30:43.946 Unknown (12h): Supported 00:30:43.946 Copy (19h): Supported LBA-Change 00:30:43.946 Unknown (1Dh): Supported LBA-Change 00:30:43.946 00:30:43.946 Error Log 00:30:43.946 ========= 00:30:43.946 00:30:43.946 Arbitration 00:30:43.946 =========== 00:30:43.946 Arbitration Burst: no limit 00:30:43.946 00:30:43.946 Power Management 00:30:43.946 ================ 00:30:43.946 Number of Power States: 1 00:30:43.946 Current Power State: Power State #0 00:30:43.946 Power State #0: 00:30:43.946 Max Power: 25.00 W 00:30:43.946 Non-Operational State: Operational 00:30:43.946 Entry Latency: 16 microseconds 00:30:43.946 Exit Latency: 4 microseconds 00:30:43.946 Relative Read Throughput: 0 00:30:43.946 Relative Read Latency: 0 00:30:43.946 Relative Write Throughput: 0 00:30:43.946 Relative Write Latency: 0 00:30:43.946 Idle Power: Not Reported 00:30:43.946 Active Power: Not Reported 00:30:43.946 Non-Operational Permissive Mode: Not Supported 00:30:43.946 00:30:43.946 Health Information 00:30:43.946 ================== 00:30:43.946 Critical Warnings: 00:30:43.946 Available Spare Space: OK 00:30:43.946 Temperature: OK 00:30:43.946 Device Reliability: OK 00:30:43.946 Read Only: No 00:30:43.946 Volatile Memory Backup: OK 00:30:43.946 Current Temperature: 323 Kelvin (50 Celsius) 00:30:43.946 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:43.946 Available Spare: 0% 00:30:43.946 Available Spare Threshold: 0% 00:30:43.946 Life Percentage Used: 0% 00:30:43.946 Data Units Read: 902 00:30:43.946 Data Units Written: 832 00:30:43.946 Host Read Commands: 33288 00:30:43.946 Host Write Commands: 32711 00:30:43.946 Controller Busy Time: 0 minutes 00:30:43.946 Power Cycles: 0 00:30:43.946 Power On Hours: 0 hours 00:30:43.946 Unsafe Shutdowns: 0 00:30:43.946 Unrecoverable Media Errors: 0 00:30:43.946 Lifetime Error Log Entries: 0 00:30:43.946 Warning Temperature Time: 0 minutes 00:30:43.946 Critical Temperature Time: 0 minutes 00:30:43.946 00:30:43.946 Number of Queues 00:30:43.946 ================ 00:30:43.946 Number of I/O Submission Queues: 64 00:30:43.946 Number of I/O Completion Queues: 64 00:30:43.946 00:30:43.946 ZNS Specific Controller Data 00:30:43.946 ============================ 00:30:43.946 Zone Append Size Limit: 0 00:30:43.946 00:30:43.946 00:30:43.946 Active Namespaces 00:30:43.946 ================= 00:30:43.946 Namespace ID:1 00:30:43.946 Error Recovery Timeout: Unlimited 00:30:43.946 Command Set Identifier: NVM (00h) 00:30:43.946 Deallocate: Supported 00:30:43.946 Deallocated/Unwritten Error: Supported 00:30:43.946 Deallocated Read Value: All 0x00 00:30:43.946 Deallocate in Write Zeroes: Not Supported 00:30:43.946 Deallocated Guard Field: 0xFFFF 00:30:43.946 Flush: Supported 00:30:43.946 Reservation: Not Supported 00:30:43.946 Namespace Sharing Capabilities: Multiple Controllers 00:30:43.946 Size (in LBAs): 262144 (1GiB) 00:30:43.946 Capacity (in LBAs): 262144 (1GiB) 00:30:43.946 Utilization (in LBAs): 262144 (1GiB) 00:30:43.946 Thin Provisioning: Not Supported 00:30:43.946 Per-NS Atomic Units: No 00:30:43.946 Maximum Single Source Range Length: 128 00:30:43.946 Maximum Copy Length: 128 00:30:43.946 Maximum Source Range Count: 128 00:30:43.946 NGUID/EUI64 Never Reused: No 00:30:43.946 Namespace Write Protected: No 
00:30:43.946 Endurance group ID: 1 00:30:43.946 Number of LBA Formats: 8 00:30:43.946 Current LBA Format: LBA Format #04 00:30:43.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:43.946 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:43.946 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:43.946 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:43.946 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:43.946 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:43.946 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:43.946 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:43.946 00:30:43.946 Get Feature FDP: 00:30:43.946 ================ 00:30:43.946 Enabled: Yes 00:30:43.946 FDP configuration index: 0 00:30:43.946 00:30:43.946 FDP configurations log page 00:30:43.946 =========================== 00:30:43.946 Number of FDP configurations: 1 00:30:43.946 Version: 0 00:30:43.946 Size: 112 00:30:43.946 FDP Configuration Descriptor: 0 00:30:43.946 Descriptor Size: 96 00:30:43.946 Reclaim Group Identifier format: 2 00:30:43.946 FDP Volatile Write Cache: Not Present 00:30:43.946 FDP Configuration: Valid 00:30:43.946 Vendor Specific Size: 0 00:30:43.946 Number of Reclaim Groups: 2 00:30:43.946 Number of Reclaim Unit Handles: 8 00:30:43.946 Max Placement Identifiers: 128 00:30:43.946 Number of Namespaces Supported: 256 00:30:43.946 Reclaim Unit Nominal Size: 6000000 bytes 00:30:43.946 Estimated Reclaim Unit Time Limit: Not Reported 00:30:43.946 RUH Desc #000: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #001: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #002: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #003: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #004: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #005: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #006: RUH Type: Initially Isolated 00:30:43.946 RUH Desc #007: RUH Type: Initially Isolated 00:30:43.946 00:30:43.946 FDP reclaim unit handle usage log page 00:30:43.946 ====================================== 00:30:43.946 Number of Reclaim Unit Handles: 8 00:30:43.946 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:43.946 RUH Usage Desc #001: RUH Attributes: Unused 00:30:43.946 RUH Usage Desc #002: RUH Attributes: Unused 00:30:43.946 RUH Usage Desc #003: RUH Attributes: Unused 00:30:43.946 RUH Usage Desc #004: RUH Attributes: Unused 00:30:43.946 RUH Usage Desc #005: RUH Attributes: Unused 00:30:43.946 RUH Usage Desc #006: RUH Attributes: Unused 00:30:43.946 RUH Usage Desc #007: RUH Attributes: Unused 00:30:43.946 00:30:43.946 FDP statistics log page 00:30:43.946 ======================= 00:30:43.946 Host bytes with metadata written: 510697472 00:30:43.946 Media bytes with metadata written: 512786432 00:30:43.946 Media bytes erased: 0 00:30:43.946 00:30:43.946 FDP events log page 00:30:43.946 =================== 00:30:43.946 Number of FDP events: 0 00:30:43.946 00:30:43.946 NVM Specific Namespace Data 00:30:43.946 =========================== 00:30:43.946 Logical Block Storage Tag Mask: 0 00:30:43.946 Protection Information Capabilities: 00:30:43.946 16b Guard Protection Information Storage Tag Support: No 00:30:43.946 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:43.946 Storage Tag Check Read Support: No 00:30:43.946 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.946 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.946
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.947 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.947 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.947 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.947 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.947 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:43.947 00:30:43.947 real 0m1.927s 00:30:43.947 user 0m0.745s 00:30:43.947 sys 0m0.929s 00:30:43.947 17:27:21 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.947 17:27:21 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.947 ************************************ 00:30:43.947 END TEST nvme_identify 00:30:43.947 ************************************ 00:30:43.947 17:27:21 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:43.947 17:27:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:43.947 17:27:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.947 17:27:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:43.947 ************************************ 00:30:43.947 START TEST nvme_perf 00:30:43.947 ************************************ 00:30:43.947 17:27:21 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:30:43.947 17:27:21 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:45.327 Initializing NVMe Controllers 00:30:45.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:45.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:45.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:45.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:45.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:45.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:45.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:45.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:45.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:45.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:30:45.327 Initialization complete. Launching workers. 
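Note on the identify invocation above: spdk_nvme_identify selects a controller with a transport ID string ('trtype:PCIe traddr:0000:00:13.0'), a space-separated list of key:value pairs naming the transport type and the PCIe address. A minimal Python sketch of how such a string breaks into fields; the parser is illustrative only, not SPDK's own transport-ID parsing:

    # Split an SPDK-style transport ID string into key/value fields.
    # Illustrative only; SPDK parses these strings in C internally.
    def parse_trid(trid: str) -> dict:
        fields = {}
        for token in trid.split():
            key, _, value = token.partition(":")  # split at first ':' only,
            fields[key] = value                   # so '0000:00:13.0' survives
        return fields

    print(parse_trid("trtype:PCIe traddr:0000:00:13.0"))
    # {'trtype': 'PCIe', 'traddr': '0000:00:13.0'}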
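The health information in the identify output above reports the composite temperature in Kelvin with a Celsius value in parentheses (323 Kelvin / 50 Celsius current, against a 343 Kelvin / 70 Celsius threshold). NVMe devices report temperature in Kelvin; the printed pairs imply a plain integer offset of 273. A minimal sketch of that conversion, checked against the two pairs in the log:

    # Kelvin -> Celsius as printed in the identify health log above.
    # The log's pairs imply an integer offset of 273 (not 273.15).
    def kelvin_to_celsius(kelvin: int) -> int:
        return kelvin - 273

    assert kelvin_to_celsius(323) == 50  # Current Temperature
    assert kelvin_to_celsius(343) == 70  # Temperature Threshold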
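The FDP statistics log page above pairs 510697472 host bytes with 512786432 media bytes with metadata written. One way to read those two counters is as a metadata write-amplification ratio; that reading is an illustration layered on top of the log, not something the tool itself computes:

    # Hypothetical helper: ratio of media writes to host writes using the
    # two counters from the FDP statistics log page above.
    def fdp_write_amplification(host_bytes: int, media_bytes: int) -> float:
        return media_bytes / host_bytes if host_bytes else 0.0

    print(f"{fdp_write_amplification(510697472, 512786432):.4f}")  # 1.0041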
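The nvme_perf results below print two views of the same latency samples: a percentile summary (1.00000% through 99.99999%) and a cumulative histogram whose "Range in us ... Cumulative IO count" rows give the running share of I/Os completed at or below each bucket's upper bound. A percentile is then just the upper bound of the first bucket whose cumulative fraction reaches the target. A minimal sketch with made-up buckets shaped like the histograms below; neither the function nor the data is SPDK's implementation:

    # Read a percentile off a cumulative latency histogram: each entry is
    # (bucket upper bound in microseconds, cumulative I/O count so far).
    def percentile_from_cumulative(buckets, total_ios, pct):
        threshold = total_ios * pct / 100.0
        for upper_us, cumulative in buckets:
            if cumulative >= threshold:
                return upper_us
        return buckets[-1][0]

    # Hypothetical bucket data, loosely shaped like the histograms below.
    sample = [(6925.638, 150), (7297.677, 1500), (7955.899, 7500),
              (9100.632, 13500), (14595.354, 14850), (50368.279, 15000)]
    print(percentile_from_cumulative(sample, 15000, 50.0))  # 7955.899
    print(percentile_from_cumulative(sample, 15000, 99.0))  # 14595.354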
00:30:45.327 ======================================================== 00:30:45.327 Latency(us) 00:30:45.327 Device Information : IOPS MiB/s Average min max 00:30:45.327 PCIE (0000:00:10.0) NSID 1 from core 0: 15061.35 176.50 8516.28 6733.32 50227.30 00:30:45.327 PCIE (0000:00:11.0) NSID 1 from core 0: 15061.35 176.50 8495.62 6808.14 46531.17 00:30:45.327 PCIE (0000:00:13.0) NSID 1 from core 0: 15061.35 176.50 8473.46 6819.17 43847.99 00:30:45.327 PCIE (0000:00:12.0) NSID 1 from core 0: 15061.35 176.50 8455.64 6817.69 40919.51 00:30:45.327 PCIE (0000:00:12.0) NSID 2 from core 0: 15061.35 176.50 8439.79 6817.88 38549.35 00:30:45.327 PCIE (0000:00:12.0) NSID 3 from core 0: 15125.17 177.25 8387.99 6812.11 31179.09 00:30:45.327 ======================================================== 00:30:45.327 Total : 90431.90 1059.75 8461.41 6733.32 50227.30 00:30:45.327 00:30:45.327 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:45.327 ================================================================================= 00:30:45.327 1.00000% : 6925.638us 00:30:45.327 10.00000% : 7297.677us 00:30:45.327 25.00000% : 7612.479us 00:30:45.327 50.00000% : 7955.899us 00:30:45.327 75.00000% : 8413.792us 00:30:45.327 90.00000% : 9100.632us 00:30:45.327 95.00000% : 10417.076us 00:30:45.327 98.00000% : 14595.354us 00:30:45.327 99.00000% : 16827.584us 00:30:45.327 99.50000% : 42813.038us 00:30:45.327 99.90000% : 49452.493us 00:30:45.327 99.99000% : 50139.333us 00:30:45.327 99.99900% : 50368.279us 00:30:45.327 99.99990% : 50368.279us 00:30:45.327 99.99999% : 50368.279us 00:30:45.327 00:30:45.327 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:45.327 ================================================================================= 00:30:45.327 1.00000% : 7011.493us 00:30:45.327 10.00000% : 7383.532us 00:30:45.327 25.00000% : 7669.715us 00:30:45.327 50.00000% : 7955.899us 00:30:45.327 75.00000% : 8356.555us 00:30:45.327 90.00000% : 9100.632us 00:30:45.327 95.00000% : 10588.786us 00:30:45.327 98.00000% : 14767.064us 00:30:45.327 99.00000% : 17171.004us 00:30:45.327 99.50000% : 39836.730us 00:30:45.327 99.90000% : 46018.292us 00:30:45.327 99.99000% : 46705.132us 00:30:45.327 99.99900% : 46705.132us 00:30:45.327 99.99990% : 46705.132us 00:30:45.327 99.99999% : 46705.132us 00:30:45.327 00:30:45.327 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:45.327 ================================================================================= 00:30:45.327 1.00000% : 7011.493us 00:30:45.327 10.00000% : 7383.532us 00:30:45.327 25.00000% : 7612.479us 00:30:45.327 50.00000% : 7955.899us 00:30:45.327 75.00000% : 8356.555us 00:30:45.327 90.00000% : 9157.869us 00:30:45.327 95.00000% : 10588.786us 00:30:45.327 98.00000% : 14366.407us 00:30:45.327 99.00000% : 16484.164us 00:30:45.327 99.50000% : 37089.369us 00:30:45.327 99.90000% : 43499.878us 00:30:45.327 99.99000% : 43957.771us 00:30:45.327 99.99900% : 43957.771us 00:30:45.327 99.99990% : 43957.771us 00:30:45.327 99.99999% : 43957.771us 00:30:45.327 00:30:45.327 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:45.327 ================================================================================= 00:30:45.327 1.00000% : 7011.493us 00:30:45.327 10.00000% : 7383.532us 00:30:45.327 25.00000% : 7669.715us 00:30:45.327 50.00000% : 7955.899us 00:30:45.327 75.00000% : 8356.555us 00:30:45.327 90.00000% : 9157.869us 00:30:45.327 95.00000% : 10646.023us 00:30:45.327 98.00000% : 14652.590us 00:30:45.327 99.00000% : 
16942.058us 00:30:45.327 99.50000% : 34570.955us 00:30:45.327 99.90000% : 40523.570us 00:30:45.327 99.99000% : 40981.464us 00:30:45.327 99.99900% : 40981.464us 00:30:45.327 99.99990% : 40981.464us 00:30:45.327 99.99999% : 40981.464us 00:30:45.327 00:30:45.327 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:45.327 ================================================================================= 00:30:45.327 1.00000% : 7011.493us 00:30:45.327 10.00000% : 7383.532us 00:30:45.327 25.00000% : 7669.715us 00:30:45.327 50.00000% : 7955.899us 00:30:45.327 75.00000% : 8356.555us 00:30:45.327 90.00000% : 9100.632us 00:30:45.327 95.00000% : 10760.496us 00:30:45.327 98.00000% : 14996.010us 00:30:45.327 99.00000% : 16827.584us 00:30:45.327 99.50000% : 31823.595us 00:30:45.327 99.90000% : 38234.103us 00:30:45.327 99.99000% : 38691.997us 00:30:45.327 99.99900% : 38691.997us 00:30:45.327 99.99990% : 38691.997us 00:30:45.327 99.99999% : 38691.997us 00:30:45.327 00:30:45.327 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:45.327 ================================================================================= 00:30:45.327 1.00000% : 7011.493us 00:30:45.327 10.00000% : 7383.532us 00:30:45.327 25.00000% : 7669.715us 00:30:45.327 50.00000% : 7955.899us 00:30:45.327 75.00000% : 8356.555us 00:30:45.327 90.00000% : 9157.869us 00:30:45.327 95.00000% : 10760.496us 00:30:45.327 98.00000% : 15224.957us 00:30:45.327 99.00000% : 16484.164us 00:30:45.327 99.50000% : 24611.773us 00:30:45.327 99.90000% : 30907.808us 00:30:45.327 99.99000% : 31365.701us 00:30:45.327 99.99900% : 31365.701us 00:30:45.327 99.99990% : 31365.701us 00:30:45.327 99.99999% : 31365.701us 00:30:45.327 00:30:45.327 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:45.327 ============================================================================== 00:30:45.327 Range in us Cumulative IO count 00:30:45.327 6725.310 - 6753.928: 0.0265% ( 4) 00:30:45.327 6753.928 - 6782.547: 0.0927% ( 10) 00:30:45.327 6782.547 - 6811.165: 0.1589% ( 10) 00:30:45.327 6811.165 - 6839.783: 0.3774% ( 33) 00:30:45.327 6839.783 - 6868.402: 0.5561% ( 27) 00:30:45.327 6868.402 - 6897.020: 0.9203% ( 55) 00:30:45.327 6897.020 - 6925.638: 1.3043% ( 58) 00:30:45.327 6925.638 - 6954.257: 1.6684% ( 55) 00:30:45.327 6954.257 - 6982.875: 2.1186% ( 68) 00:30:45.327 6982.875 - 7011.493: 2.6284% ( 77) 00:30:45.327 7011.493 - 7040.112: 3.1449% ( 78) 00:30:45.327 7040.112 - 7068.730: 3.7738% ( 95) 00:30:45.327 7068.730 - 7097.348: 4.3697% ( 90) 00:30:45.327 7097.348 - 7125.967: 5.0318% ( 100) 00:30:45.327 7125.967 - 7154.585: 5.7336% ( 106) 00:30:45.327 7154.585 - 7183.203: 6.4817% ( 113) 00:30:45.327 7183.203 - 7211.822: 7.3689% ( 134) 00:30:45.327 7211.822 - 7240.440: 8.1766% ( 122) 00:30:45.327 7240.440 - 7269.059: 9.0704% ( 135) 00:30:45.327 7269.059 - 7297.677: 10.1165% ( 158) 00:30:45.327 7297.677 - 7326.295: 11.1295% ( 153) 00:30:45.327 7326.295 - 7383.532: 13.5196% ( 361) 00:30:45.327 7383.532 - 7440.769: 16.5519% ( 458) 00:30:45.327 7440.769 - 7498.005: 19.7630% ( 485) 00:30:45.327 7498.005 - 7555.242: 23.2455% ( 526) 00:30:45.327 7555.242 - 7612.479: 26.8803% ( 549) 00:30:45.327 7612.479 - 7669.715: 30.7932% ( 591) 00:30:45.327 7669.715 - 7726.952: 34.8252% ( 609) 00:30:45.327 7726.952 - 7784.189: 38.7315% ( 590) 00:30:45.327 7784.189 - 7841.425: 42.7701% ( 610) 00:30:45.327 7841.425 - 7898.662: 46.7227% ( 597) 00:30:45.327 7898.662 - 7955.899: 50.5363% ( 576) 00:30:45.327 7955.899 - 8013.135: 54.2505% ( 561) 
00:30:45.327 8013.135 - 8070.372: 57.9052% ( 552) 00:30:45.327 8070.372 - 8127.609: 61.3347% ( 518) 00:30:45.327 8127.609 - 8184.845: 64.6186% ( 496) 00:30:45.328 8184.845 - 8242.082: 67.6708% ( 461) 00:30:45.328 8242.082 - 8299.319: 70.5575% ( 436) 00:30:45.328 8299.319 - 8356.555: 73.3779% ( 426) 00:30:45.328 8356.555 - 8413.792: 75.9997% ( 396) 00:30:45.328 8413.792 - 8471.029: 78.3700% ( 358) 00:30:45.328 8471.029 - 8528.266: 80.6144% ( 339) 00:30:45.328 8528.266 - 8585.502: 82.6536% ( 308) 00:30:45.328 8585.502 - 8642.739: 84.3088% ( 250) 00:30:45.328 8642.739 - 8699.976: 85.7190% ( 213) 00:30:45.328 8699.976 - 8757.212: 86.8578% ( 172) 00:30:45.328 8757.212 - 8814.449: 87.7383% ( 133) 00:30:45.328 8814.449 - 8871.686: 88.3872% ( 98) 00:30:45.328 8871.686 - 8928.922: 88.8573% ( 71) 00:30:45.328 8928.922 - 8986.159: 89.3273% ( 71) 00:30:45.328 8986.159 - 9043.396: 89.7312% ( 61) 00:30:45.328 9043.396 - 9100.632: 90.1086% ( 57) 00:30:45.328 9100.632 - 9157.869: 90.4793% ( 56) 00:30:45.328 9157.869 - 9215.106: 90.8170% ( 51) 00:30:45.328 9215.106 - 9272.342: 91.1811% ( 55) 00:30:45.328 9272.342 - 9329.579: 91.4526% ( 41) 00:30:45.328 9329.579 - 9386.816: 91.7505% ( 45) 00:30:45.328 9386.816 - 9444.052: 92.0154% ( 40) 00:30:45.328 9444.052 - 9501.289: 92.2868% ( 41) 00:30:45.328 9501.289 - 9558.526: 92.4921% ( 31) 00:30:45.328 9558.526 - 9615.762: 92.7105% ( 33) 00:30:45.328 9615.762 - 9672.999: 92.9158% ( 31) 00:30:45.328 9672.999 - 9730.236: 93.1210% ( 31) 00:30:45.328 9730.236 - 9787.472: 93.3263% ( 31) 00:30:45.328 9787.472 - 9844.709: 93.5580% ( 35) 00:30:45.328 9844.709 - 9901.946: 93.7831% ( 34) 00:30:45.328 9901.946 - 9959.183: 94.0215% ( 36) 00:30:45.328 9959.183 - 10016.419: 94.2068% ( 28) 00:30:45.328 10016.419 - 10073.656: 94.3525% ( 22) 00:30:45.328 10073.656 - 10130.893: 94.4915% ( 21) 00:30:45.328 10130.893 - 10188.129: 94.6107% ( 18) 00:30:45.328 10188.129 - 10245.366: 94.7100% ( 15) 00:30:45.328 10245.366 - 10302.603: 94.8358% ( 19) 00:30:45.328 10302.603 - 10359.839: 94.9219% ( 13) 00:30:45.328 10359.839 - 10417.076: 95.0278% ( 16) 00:30:45.328 10417.076 - 10474.313: 95.1073% ( 12) 00:30:45.328 10474.313 - 10531.549: 95.1867% ( 12) 00:30:45.328 10531.549 - 10588.786: 95.2397% ( 8) 00:30:45.328 10588.786 - 10646.023: 95.3059% ( 10) 00:30:45.328 10646.023 - 10703.259: 95.3655% ( 9) 00:30:45.328 10703.259 - 10760.496: 95.4052% ( 6) 00:30:45.328 10760.496 - 10817.733: 95.4582% ( 8) 00:30:45.328 10817.733 - 10874.969: 95.5244% ( 10) 00:30:45.328 10874.969 - 10932.206: 95.5641% ( 6) 00:30:45.328 10932.206 - 10989.443: 95.5840% ( 3) 00:30:45.328 10989.443 - 11046.679: 95.6104% ( 4) 00:30:45.328 11046.679 - 11103.916: 95.6435% ( 5) 00:30:45.328 11103.916 - 11161.153: 95.6766% ( 5) 00:30:45.328 11161.153 - 11218.390: 95.7031% ( 4) 00:30:45.328 11218.390 - 11275.626: 95.7296% ( 4) 00:30:45.328 11275.626 - 11332.863: 95.7760% ( 7) 00:30:45.328 11332.863 - 11390.100: 95.8091% ( 5) 00:30:45.328 11390.100 - 11447.336: 95.8554% ( 7) 00:30:45.328 11447.336 - 11504.573: 95.8819% ( 4) 00:30:45.328 11504.573 - 11561.810: 95.9150% ( 5) 00:30:45.328 11561.810 - 11619.046: 95.9415% ( 4) 00:30:45.328 11619.046 - 11676.283: 95.9680% ( 4) 00:30:45.328 11676.283 - 11733.520: 95.9944% ( 4) 00:30:45.328 11733.520 - 11790.756: 96.0209% ( 4) 00:30:45.328 11790.756 - 11847.993: 96.0540% ( 5) 00:30:45.328 11847.993 - 11905.230: 96.0805% ( 4) 00:30:45.328 11905.230 - 11962.466: 96.1004% ( 3) 00:30:45.328 11962.466 - 12019.703: 96.1335% ( 5) 00:30:45.328 12019.703 - 12076.940: 96.1864% ( 8) 00:30:45.328 
12076.940 - 12134.176: 96.2460% ( 9) 00:30:45.328 12134.176 - 12191.413: 96.3122% ( 10) 00:30:45.328 12191.413 - 12248.650: 96.3784% ( 10) 00:30:45.328 12248.650 - 12305.886: 96.4447% ( 10) 00:30:45.328 12305.886 - 12363.123: 96.5109% ( 10) 00:30:45.328 12363.123 - 12420.360: 96.5704% ( 9) 00:30:45.328 12420.360 - 12477.597: 96.6433% ( 11) 00:30:45.328 12477.597 - 12534.833: 96.7227% ( 12) 00:30:45.328 12534.833 - 12592.070: 96.7757% ( 8) 00:30:45.328 12592.070 - 12649.307: 96.8419% ( 10) 00:30:45.328 12649.307 - 12706.543: 96.9015% ( 9) 00:30:45.328 12706.543 - 12763.780: 96.9478% ( 7) 00:30:45.328 12763.780 - 12821.017: 97.0074% ( 9) 00:30:45.328 12821.017 - 12878.253: 97.0471% ( 6) 00:30:45.328 12878.253 - 12935.490: 97.0935% ( 7) 00:30:45.328 12935.490 - 12992.727: 97.1398% ( 7) 00:30:45.328 12992.727 - 13049.963: 97.1862% ( 7) 00:30:45.328 13049.963 - 13107.200: 97.2458% ( 9) 00:30:45.328 13107.200 - 13164.437: 97.2987% ( 8) 00:30:45.328 13164.437 - 13221.673: 97.3318% ( 5) 00:30:45.328 13221.673 - 13278.910: 97.3782% ( 7) 00:30:45.328 13278.910 - 13336.147: 97.4179% ( 6) 00:30:45.328 13336.147 - 13393.383: 97.4378% ( 3) 00:30:45.328 13393.383 - 13450.620: 97.4709% ( 5) 00:30:45.328 13450.620 - 13507.857: 97.4775% ( 1) 00:30:45.328 13507.857 - 13565.093: 97.4974% ( 3) 00:30:45.328 13565.093 - 13622.330: 97.5172% ( 3) 00:30:45.328 13622.330 - 13679.567: 97.5371% ( 3) 00:30:45.328 13679.567 - 13736.803: 97.5437% ( 1) 00:30:45.328 13736.803 - 13794.040: 97.5636% ( 3) 00:30:45.328 13794.040 - 13851.277: 97.5768% ( 2) 00:30:45.328 13851.277 - 13908.514: 97.5967% ( 3) 00:30:45.328 13908.514 - 13965.750: 97.6231% ( 4) 00:30:45.328 13965.750 - 14022.987: 97.6629% ( 6) 00:30:45.328 14022.987 - 14080.224: 97.6960% ( 5) 00:30:45.328 14080.224 - 14137.460: 97.7357% ( 6) 00:30:45.328 14137.460 - 14194.697: 97.7688% ( 5) 00:30:45.328 14194.697 - 14251.934: 97.8019% ( 5) 00:30:45.328 14251.934 - 14309.170: 97.8416% ( 6) 00:30:45.328 14309.170 - 14366.407: 97.8747% ( 5) 00:30:45.328 14366.407 - 14423.644: 97.9211% ( 7) 00:30:45.328 14423.644 - 14480.880: 97.9542% ( 5) 00:30:45.328 14480.880 - 14538.117: 97.9939% ( 6) 00:30:45.328 14538.117 - 14595.354: 98.0336% ( 6) 00:30:45.328 14595.354 - 14652.590: 98.0734% ( 6) 00:30:45.328 14652.590 - 14767.064: 98.1396% ( 10) 00:30:45.328 14767.064 - 14881.537: 98.2124% ( 11) 00:30:45.328 14881.537 - 14996.010: 98.3051% ( 14) 00:30:45.328 14996.010 - 15110.484: 98.3647% ( 9) 00:30:45.328 15110.484 - 15224.957: 98.4044% ( 6) 00:30:45.328 15224.957 - 15339.431: 98.4375% ( 5) 00:30:45.328 15339.431 - 15453.904: 98.4706% ( 5) 00:30:45.328 15453.904 - 15568.377: 98.5103% ( 6) 00:30:45.328 15568.377 - 15682.851: 98.5434% ( 5) 00:30:45.328 15682.851 - 15797.324: 98.5832% ( 6) 00:30:45.328 15797.324 - 15911.797: 98.6229% ( 6) 00:30:45.328 15911.797 - 16026.271: 98.6957% ( 11) 00:30:45.328 16026.271 - 16140.744: 98.7685% ( 11) 00:30:45.328 16140.744 - 16255.217: 98.8480% ( 12) 00:30:45.328 16255.217 - 16369.691: 98.8811% ( 5) 00:30:45.328 16369.691 - 16484.164: 98.9142% ( 5) 00:30:45.328 16484.164 - 16598.638: 98.9539% ( 6) 00:30:45.328 16598.638 - 16713.111: 98.9936% ( 6) 00:30:45.328 16713.111 - 16827.584: 99.0334% ( 6) 00:30:45.328 16827.584 - 16942.058: 99.0599% ( 4) 00:30:45.328 16942.058 - 17056.531: 99.0996% ( 6) 00:30:45.328 17056.531 - 17171.004: 99.1393% ( 6) 00:30:45.328 17171.004 - 17285.478: 99.1525% ( 2) 00:30:45.328 40523.570 - 40752.517: 99.1592% ( 1) 00:30:45.328 40752.517 - 40981.464: 99.1989% ( 6) 00:30:45.328 40981.464 - 41210.410: 99.2320% ( 5) 
00:30:45.328 41210.410 - 41439.357: 99.2717% ( 6) 00:30:45.328 41439.357 - 41668.304: 99.3181% ( 7) 00:30:45.328 41668.304 - 41897.251: 99.3512% ( 5) 00:30:45.328 41897.251 - 42126.197: 99.3909% ( 6) 00:30:45.328 42126.197 - 42355.144: 99.4306% ( 6) 00:30:45.328 42355.144 - 42584.091: 99.4770% ( 7) 00:30:45.328 42584.091 - 42813.038: 99.5167% ( 6) 00:30:45.328 42813.038 - 43041.984: 99.5498% ( 5) 00:30:45.328 43041.984 - 43270.931: 99.5763% ( 4) 00:30:45.328 47163.025 - 47391.972: 99.5961% ( 3) 00:30:45.328 47391.972 - 47620.919: 99.6359% ( 6) 00:30:45.328 47620.919 - 47849.866: 99.6756% ( 6) 00:30:45.328 47849.866 - 48078.812: 99.7153% ( 6) 00:30:45.328 48078.812 - 48307.759: 99.7550% ( 6) 00:30:45.328 48307.759 - 48536.706: 99.7881% ( 5) 00:30:45.328 48536.706 - 48765.652: 99.8212% ( 5) 00:30:45.328 48765.652 - 48994.599: 99.8477% ( 4) 00:30:45.328 48994.599 - 49223.546: 99.8742% ( 4) 00:30:45.328 49223.546 - 49452.493: 99.9073% ( 5) 00:30:45.328 49452.493 - 49681.439: 99.9338% ( 4) 00:30:45.328 49681.439 - 49910.386: 99.9669% ( 5) 00:30:45.328 49910.386 - 50139.333: 99.9934% ( 4) 00:30:45.328 50139.333 - 50368.279: 100.0000% ( 1) 00:30:45.328 00:30:45.328 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:45.328 ============================================================================== 00:30:45.328 Range in us Cumulative IO count 00:30:45.328 6782.547 - 6811.165: 0.0132% ( 2) 00:30:45.328 6811.165 - 6839.783: 0.0331% ( 3) 00:30:45.328 6839.783 - 6868.402: 0.1192% ( 13) 00:30:45.328 6868.402 - 6897.020: 0.2317% ( 17) 00:30:45.328 6897.020 - 6925.638: 0.3708% ( 21) 00:30:45.328 6925.638 - 6954.257: 0.6356% ( 40) 00:30:45.328 6954.257 - 6982.875: 0.9931% ( 54) 00:30:45.328 6982.875 - 7011.493: 1.4036% ( 62) 00:30:45.328 7011.493 - 7040.112: 1.8737% ( 71) 00:30:45.328 7040.112 - 7068.730: 2.4232% ( 83) 00:30:45.328 7068.730 - 7097.348: 2.9131% ( 74) 00:30:45.328 7097.348 - 7125.967: 3.5686% ( 99) 00:30:45.328 7125.967 - 7154.585: 4.2108% ( 97) 00:30:45.328 7154.585 - 7183.203: 4.9126% ( 106) 00:30:45.328 7183.203 - 7211.822: 5.6872% ( 117) 00:30:45.328 7211.822 - 7240.440: 6.6075% ( 139) 00:30:45.328 7240.440 - 7269.059: 7.5410% ( 141) 00:30:45.328 7269.059 - 7297.677: 8.4415% ( 136) 00:30:45.328 7297.677 - 7326.295: 9.5273% ( 164) 00:30:45.328 7326.295 - 7383.532: 11.7916% ( 342) 00:30:45.328 7383.532 - 7440.769: 14.4399% ( 400) 00:30:45.328 7440.769 - 7498.005: 17.4325% ( 452) 00:30:45.329 7498.005 - 7555.242: 20.9017% ( 524) 00:30:45.329 7555.242 - 7612.479: 24.7418% ( 580) 00:30:45.329 7612.479 - 7669.715: 28.9592% ( 637) 00:30:45.329 7669.715 - 7726.952: 33.3157% ( 658) 00:30:45.329 7726.952 - 7784.189: 37.9502% ( 700) 00:30:45.329 7784.189 - 7841.425: 42.4523% ( 680) 00:30:45.329 7841.425 - 7898.662: 46.8816% ( 669) 00:30:45.329 7898.662 - 7955.899: 51.1653% ( 647) 00:30:45.329 7955.899 - 8013.135: 55.1245% ( 598) 00:30:45.329 8013.135 - 8070.372: 59.0373% ( 591) 00:30:45.329 8070.372 - 8127.609: 62.7847% ( 566) 00:30:45.329 8127.609 - 8184.845: 66.2474% ( 523) 00:30:45.329 8184.845 - 8242.082: 69.4452% ( 483) 00:30:45.329 8242.082 - 8299.319: 72.4709% ( 457) 00:30:45.329 8299.319 - 8356.555: 75.2648% ( 422) 00:30:45.329 8356.555 - 8413.792: 77.9396% ( 404) 00:30:45.329 8413.792 - 8471.029: 80.4290% ( 376) 00:30:45.329 8471.029 - 8528.266: 82.6271% ( 332) 00:30:45.329 8528.266 - 8585.502: 84.4346% ( 273) 00:30:45.329 8585.502 - 8642.739: 85.8448% ( 213) 00:30:45.329 8642.739 - 8699.976: 86.8843% ( 157) 00:30:45.329 8699.976 - 8757.212: 87.6324% ( 113) 00:30:45.329 
8757.212 - 8814.449: 88.1687% ( 81) 00:30:45.329 8814.449 - 8871.686: 88.6123% ( 67) 00:30:45.329 8871.686 - 8928.922: 89.0493% ( 66) 00:30:45.329 8928.922 - 8986.159: 89.4134% ( 55) 00:30:45.329 8986.159 - 9043.396: 89.8106% ( 60) 00:30:45.329 9043.396 - 9100.632: 90.1682% ( 54) 00:30:45.329 9100.632 - 9157.869: 90.5456% ( 57) 00:30:45.329 9157.869 - 9215.106: 90.8965% ( 53) 00:30:45.329 9215.106 - 9272.342: 91.2341% ( 51) 00:30:45.329 9272.342 - 9329.579: 91.5453% ( 47) 00:30:45.329 9329.579 - 9386.816: 91.8101% ( 40) 00:30:45.329 9386.816 - 9444.052: 92.0683% ( 39) 00:30:45.329 9444.052 - 9501.289: 92.3133% ( 37) 00:30:45.329 9501.289 - 9558.526: 92.5318% ( 33) 00:30:45.329 9558.526 - 9615.762: 92.7370% ( 31) 00:30:45.329 9615.762 - 9672.999: 92.9423% ( 31) 00:30:45.329 9672.999 - 9730.236: 93.1872% ( 37) 00:30:45.329 9730.236 - 9787.472: 93.3925% ( 31) 00:30:45.329 9787.472 - 9844.709: 93.5712% ( 27) 00:30:45.329 9844.709 - 9901.946: 93.7566% ( 28) 00:30:45.329 9901.946 - 9959.183: 93.8957% ( 21) 00:30:45.329 9959.183 - 10016.419: 94.0413% ( 22) 00:30:45.329 10016.419 - 10073.656: 94.1406% ( 15) 00:30:45.329 10073.656 - 10130.893: 94.2598% ( 18) 00:30:45.329 10130.893 - 10188.129: 94.3790% ( 18) 00:30:45.329 10188.129 - 10245.366: 94.4915% ( 17) 00:30:45.329 10245.366 - 10302.603: 94.6107% ( 18) 00:30:45.329 10302.603 - 10359.839: 94.7034% ( 14) 00:30:45.329 10359.839 - 10417.076: 94.7895% ( 13) 00:30:45.329 10417.076 - 10474.313: 94.8822% ( 14) 00:30:45.329 10474.313 - 10531.549: 94.9616% ( 12) 00:30:45.329 10531.549 - 10588.786: 95.0543% ( 14) 00:30:45.329 10588.786 - 10646.023: 95.1470% ( 14) 00:30:45.329 10646.023 - 10703.259: 95.2397% ( 14) 00:30:45.329 10703.259 - 10760.496: 95.3191% ( 12) 00:30:45.329 10760.496 - 10817.733: 95.4052% ( 13) 00:30:45.329 10817.733 - 10874.969: 95.4780% ( 11) 00:30:45.329 10874.969 - 10932.206: 95.5442% ( 10) 00:30:45.329 10932.206 - 10989.443: 95.6237% ( 12) 00:30:45.329 10989.443 - 11046.679: 95.6965% ( 11) 00:30:45.329 11046.679 - 11103.916: 95.7627% ( 10) 00:30:45.329 11103.916 - 11161.153: 95.8157% ( 8) 00:30:45.329 11161.153 - 11218.390: 95.8554% ( 6) 00:30:45.329 11218.390 - 11275.626: 95.8885% ( 5) 00:30:45.329 11275.626 - 11332.863: 95.9017% ( 2) 00:30:45.329 11332.863 - 11390.100: 95.9216% ( 3) 00:30:45.329 11390.100 - 11447.336: 95.9415% ( 3) 00:30:45.329 11447.336 - 11504.573: 95.9547% ( 2) 00:30:45.329 11504.573 - 11561.810: 95.9680% ( 2) 00:30:45.329 11561.810 - 11619.046: 95.9878% ( 3) 00:30:45.329 11619.046 - 11676.283: 96.0077% ( 3) 00:30:45.329 11676.283 - 11733.520: 96.0342% ( 4) 00:30:45.329 11733.520 - 11790.756: 96.0540% ( 3) 00:30:45.329 11790.756 - 11847.993: 96.0739% ( 3) 00:30:45.329 11847.993 - 11905.230: 96.1136% ( 6) 00:30:45.329 11905.230 - 11962.466: 96.1600% ( 7) 00:30:45.329 11962.466 - 12019.703: 96.1997% ( 6) 00:30:45.329 12019.703 - 12076.940: 96.2460% ( 7) 00:30:45.329 12076.940 - 12134.176: 96.2924% ( 7) 00:30:45.329 12134.176 - 12191.413: 96.3453% ( 8) 00:30:45.329 12191.413 - 12248.650: 96.3784% ( 5) 00:30:45.329 12248.650 - 12305.886: 96.4115% ( 5) 00:30:45.329 12305.886 - 12363.123: 96.4513% ( 6) 00:30:45.329 12363.123 - 12420.360: 96.4844% ( 5) 00:30:45.329 12420.360 - 12477.597: 96.5307% ( 7) 00:30:45.329 12477.597 - 12534.833: 96.5638% ( 5) 00:30:45.329 12534.833 - 12592.070: 96.5903% ( 4) 00:30:45.329 12592.070 - 12649.307: 96.6234% ( 5) 00:30:45.329 12649.307 - 12706.543: 96.6698% ( 7) 00:30:45.329 12706.543 - 12763.780: 96.7293% ( 9) 00:30:45.329 12763.780 - 12821.017: 96.7889% ( 9) 00:30:45.329 
12821.017 - 12878.253: 96.8419% ( 8) 00:30:45.329 12878.253 - 12935.490: 96.9015% ( 9) 00:30:45.329 12935.490 - 12992.727: 96.9478% ( 7) 00:30:45.329 12992.727 - 13049.963: 97.0207% ( 11) 00:30:45.329 13049.963 - 13107.200: 97.0405% ( 3) 00:30:45.329 13107.200 - 13164.437: 97.0802% ( 6) 00:30:45.329 13164.437 - 13221.673: 97.1200% ( 6) 00:30:45.329 13221.673 - 13278.910: 97.1663% ( 7) 00:30:45.329 13278.910 - 13336.147: 97.2193% ( 8) 00:30:45.329 13336.147 - 13393.383: 97.2590% ( 6) 00:30:45.329 13393.383 - 13450.620: 97.3186% ( 9) 00:30:45.329 13450.620 - 13507.857: 97.3782% ( 9) 00:30:45.329 13507.857 - 13565.093: 97.4311% ( 8) 00:30:45.329 13565.093 - 13622.330: 97.4841% ( 8) 00:30:45.329 13622.330 - 13679.567: 97.5238% ( 6) 00:30:45.329 13679.567 - 13736.803: 97.5900% ( 10) 00:30:45.329 13736.803 - 13794.040: 97.6231% ( 5) 00:30:45.329 13794.040 - 13851.277: 97.6496% ( 4) 00:30:45.329 13851.277 - 13908.514: 97.6695% ( 3) 00:30:45.329 13908.514 - 13965.750: 97.6894% ( 3) 00:30:45.329 13965.750 - 14022.987: 97.7092% ( 3) 00:30:45.329 14022.987 - 14080.224: 97.7291% ( 3) 00:30:45.329 14080.224 - 14137.460: 97.7489% ( 3) 00:30:45.329 14137.460 - 14194.697: 97.7688% ( 3) 00:30:45.329 14194.697 - 14251.934: 97.7887% ( 3) 00:30:45.329 14251.934 - 14309.170: 97.8019% ( 2) 00:30:45.329 14309.170 - 14366.407: 97.8218% ( 3) 00:30:45.329 14366.407 - 14423.644: 97.8416% ( 3) 00:30:45.329 14423.644 - 14480.880: 97.8615% ( 3) 00:30:45.329 14480.880 - 14538.117: 97.8814% ( 3) 00:30:45.329 14595.354 - 14652.590: 97.9211% ( 6) 00:30:45.329 14652.590 - 14767.064: 98.0204% ( 15) 00:30:45.329 14767.064 - 14881.537: 98.1329% ( 17) 00:30:45.329 14881.537 - 14996.010: 98.2124% ( 12) 00:30:45.329 14996.010 - 15110.484: 98.3051% ( 14) 00:30:45.329 15110.484 - 15224.957: 98.3978% ( 14) 00:30:45.329 15224.957 - 15339.431: 98.4971% ( 15) 00:30:45.329 15339.431 - 15453.904: 98.5832% ( 13) 00:30:45.329 15453.904 - 15568.377: 98.6626% ( 12) 00:30:45.329 15568.377 - 15682.851: 98.7090% ( 7) 00:30:45.329 15682.851 - 15797.324: 98.7288% ( 3) 00:30:45.329 16369.691 - 16484.164: 98.7421% ( 2) 00:30:45.329 16484.164 - 16598.638: 98.7950% ( 8) 00:30:45.329 16598.638 - 16713.111: 98.8347% ( 6) 00:30:45.329 16713.111 - 16827.584: 98.8811% ( 7) 00:30:45.329 16827.584 - 16942.058: 98.9274% ( 7) 00:30:45.329 16942.058 - 17056.531: 98.9672% ( 6) 00:30:45.329 17056.531 - 17171.004: 99.0135% ( 7) 00:30:45.329 17171.004 - 17285.478: 99.0532% ( 6) 00:30:45.329 17285.478 - 17399.951: 99.0996% ( 7) 00:30:45.329 17399.951 - 17514.424: 99.1459% ( 7) 00:30:45.329 17514.424 - 17628.898: 99.1525% ( 1) 00:30:45.329 37547.263 - 37776.210: 99.1592% ( 1) 00:30:45.329 37776.210 - 38005.156: 99.1989% ( 6) 00:30:45.329 38005.156 - 38234.103: 99.2386% ( 6) 00:30:45.329 38234.103 - 38463.050: 99.2717% ( 5) 00:30:45.329 38463.050 - 38691.997: 99.3181% ( 7) 00:30:45.329 38691.997 - 38920.943: 99.3644% ( 7) 00:30:45.329 38920.943 - 39149.890: 99.3975% ( 5) 00:30:45.329 39149.890 - 39378.837: 99.4372% ( 6) 00:30:45.329 39378.837 - 39607.783: 99.4836% ( 7) 00:30:45.329 39607.783 - 39836.730: 99.5233% ( 6) 00:30:45.329 39836.730 - 40065.677: 99.5630% ( 6) 00:30:45.329 40065.677 - 40294.624: 99.5763% ( 2) 00:30:45.329 44186.718 - 44415.665: 99.6094% ( 5) 00:30:45.329 44415.665 - 44644.611: 99.6557% ( 7) 00:30:45.329 44644.611 - 44873.558: 99.6822% ( 4) 00:30:45.329 44873.558 - 45102.505: 99.7285% ( 7) 00:30:45.329 45102.505 - 45331.452: 99.7749% ( 7) 00:30:45.329 45331.452 - 45560.398: 99.8212% ( 7) 00:30:45.329 45560.398 - 45789.345: 99.8610% ( 6) 
00:30:45.329 45789.345 - 46018.292: 99.9073% ( 7) 00:30:45.329 46018.292 - 46247.238: 99.9404% ( 5) 00:30:45.329 46247.238 - 46476.185: 99.9868% ( 7) 00:30:45.329 46476.185 - 46705.132: 100.0000% ( 2) 00:30:45.329 00:30:45.329 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:45.329 ============================================================================== 00:30:45.329 Range in us Cumulative IO count 00:30:45.329 6811.165 - 6839.783: 0.0199% ( 3) 00:30:45.329 6839.783 - 6868.402: 0.0397% ( 3) 00:30:45.329 6868.402 - 6897.020: 0.1324% ( 14) 00:30:45.329 6897.020 - 6925.638: 0.2847% ( 23) 00:30:45.329 6925.638 - 6954.257: 0.5495% ( 40) 00:30:45.329 6954.257 - 6982.875: 0.8475% ( 45) 00:30:45.329 6982.875 - 7011.493: 1.2447% ( 60) 00:30:45.329 7011.493 - 7040.112: 1.7545% ( 77) 00:30:45.329 7040.112 - 7068.730: 2.3040% ( 83) 00:30:45.329 7068.730 - 7097.348: 2.8933% ( 89) 00:30:45.329 7097.348 - 7125.967: 3.5553% ( 100) 00:30:45.329 7125.967 - 7154.585: 4.2439% ( 104) 00:30:45.329 7154.585 - 7183.203: 4.9325% ( 104) 00:30:45.330 7183.203 - 7211.822: 5.7468% ( 123) 00:30:45.330 7211.822 - 7240.440: 6.5546% ( 122) 00:30:45.330 7240.440 - 7269.059: 7.4881% ( 141) 00:30:45.330 7269.059 - 7297.677: 8.4944% ( 152) 00:30:45.330 7297.677 - 7326.295: 9.5538% ( 160) 00:30:45.330 7326.295 - 7383.532: 11.7585% ( 333) 00:30:45.330 7383.532 - 7440.769: 14.3671% ( 394) 00:30:45.330 7440.769 - 7498.005: 17.5914% ( 487) 00:30:45.330 7498.005 - 7555.242: 21.1401% ( 536) 00:30:45.330 7555.242 - 7612.479: 25.0265% ( 587) 00:30:45.330 7612.479 - 7669.715: 29.2969% ( 645) 00:30:45.330 7669.715 - 7726.952: 33.6931% ( 664) 00:30:45.330 7726.952 - 7784.189: 38.3541% ( 704) 00:30:45.330 7784.189 - 7841.425: 42.9025% ( 687) 00:30:45.330 7841.425 - 7898.662: 47.3120% ( 666) 00:30:45.330 7898.662 - 7955.899: 51.6221% ( 651) 00:30:45.330 7955.899 - 8013.135: 55.7402% ( 622) 00:30:45.330 8013.135 - 8070.372: 59.6729% ( 594) 00:30:45.330 8070.372 - 8127.609: 63.2945% ( 547) 00:30:45.330 8127.609 - 8184.845: 66.7042% ( 515) 00:30:45.330 8184.845 - 8242.082: 69.8093% ( 469) 00:30:45.330 8242.082 - 8299.319: 72.8681% ( 462) 00:30:45.330 8299.319 - 8356.555: 75.8077% ( 444) 00:30:45.330 8356.555 - 8413.792: 78.4428% ( 398) 00:30:45.330 8413.792 - 8471.029: 80.9388% ( 377) 00:30:45.330 8471.029 - 8528.266: 82.9582% ( 305) 00:30:45.330 8528.266 - 8585.502: 84.5935% ( 247) 00:30:45.330 8585.502 - 8642.739: 85.9110% ( 199) 00:30:45.330 8642.739 - 8699.976: 86.7850% ( 132) 00:30:45.330 8699.976 - 8757.212: 87.5000% ( 108) 00:30:45.330 8757.212 - 8814.449: 88.0164% ( 78) 00:30:45.330 8814.449 - 8871.686: 88.4666% ( 68) 00:30:45.330 8871.686 - 8928.922: 88.8308% ( 55) 00:30:45.330 8928.922 - 8986.159: 89.1353% ( 46) 00:30:45.330 8986.159 - 9043.396: 89.4664% ( 50) 00:30:45.330 9043.396 - 9100.632: 89.7908% ( 49) 00:30:45.330 9100.632 - 9157.869: 90.1152% ( 49) 00:30:45.330 9157.869 - 9215.106: 90.4198% ( 46) 00:30:45.330 9215.106 - 9272.342: 90.7243% ( 46) 00:30:45.330 9272.342 - 9329.579: 90.9627% ( 36) 00:30:45.330 9329.579 - 9386.816: 91.2209% ( 39) 00:30:45.330 9386.816 - 9444.052: 91.4460% ( 34) 00:30:45.330 9444.052 - 9501.289: 91.6578% ( 32) 00:30:45.330 9501.289 - 9558.526: 91.8697% ( 32) 00:30:45.330 9558.526 - 9615.762: 92.1081% ( 36) 00:30:45.330 9615.762 - 9672.999: 92.3398% ( 35) 00:30:45.330 9672.999 - 9730.236: 92.5384% ( 30) 00:30:45.330 9730.236 - 9787.472: 92.7701% ( 35) 00:30:45.330 9787.472 - 9844.709: 93.0019% ( 35) 00:30:45.330 9844.709 - 9901.946: 93.2005% ( 30) 00:30:45.330 9901.946 - 
9959.183: 93.4190% ( 33) 00:30:45.330 9959.183 - 10016.419: 93.6110% ( 29) 00:30:45.330 10016.419 - 10073.656: 93.7963% ( 28) 00:30:45.330 10073.656 - 10130.893: 93.9817% ( 28) 00:30:45.330 10130.893 - 10188.129: 94.1141% ( 20) 00:30:45.330 10188.129 - 10245.366: 94.2730% ( 24) 00:30:45.330 10245.366 - 10302.603: 94.4121% ( 21) 00:30:45.330 10302.603 - 10359.839: 94.5710% ( 24) 00:30:45.330 10359.839 - 10417.076: 94.7166% ( 22) 00:30:45.330 10417.076 - 10474.313: 94.8490% ( 20) 00:30:45.330 10474.313 - 10531.549: 94.9682% ( 18) 00:30:45.330 10531.549 - 10588.786: 95.0742% ( 16) 00:30:45.330 10588.786 - 10646.023: 95.1867% ( 17) 00:30:45.330 10646.023 - 10703.259: 95.2860% ( 15) 00:30:45.330 10703.259 - 10760.496: 95.3787% ( 14) 00:30:45.330 10760.496 - 10817.733: 95.4648% ( 13) 00:30:45.330 10817.733 - 10874.969: 95.5045% ( 6) 00:30:45.330 10874.969 - 10932.206: 95.5310% ( 4) 00:30:45.330 10932.206 - 10989.443: 95.5508% ( 3) 00:30:45.330 10989.443 - 11046.679: 95.5707% ( 3) 00:30:45.330 11046.679 - 11103.916: 95.6038% ( 5) 00:30:45.330 11103.916 - 11161.153: 95.6369% ( 5) 00:30:45.330 11161.153 - 11218.390: 95.6766% ( 6) 00:30:45.330 11218.390 - 11275.626: 95.7164% ( 6) 00:30:45.330 11275.626 - 11332.863: 95.7561% ( 6) 00:30:45.330 11332.863 - 11390.100: 95.7958% ( 6) 00:30:45.330 11390.100 - 11447.336: 95.8289% ( 5) 00:30:45.330 11447.336 - 11504.573: 95.8620% ( 5) 00:30:45.330 11504.573 - 11561.810: 95.9017% ( 6) 00:30:45.330 11561.810 - 11619.046: 95.9680% ( 10) 00:30:45.330 11619.046 - 11676.283: 96.0209% ( 8) 00:30:45.330 11676.283 - 11733.520: 96.0606% ( 6) 00:30:45.330 11733.520 - 11790.756: 96.1070% ( 7) 00:30:45.330 11790.756 - 11847.993: 96.1467% ( 6) 00:30:45.330 11847.993 - 11905.230: 96.1931% ( 7) 00:30:45.330 11905.230 - 11962.466: 96.2460% ( 8) 00:30:45.330 11962.466 - 12019.703: 96.2858% ( 6) 00:30:45.330 12019.703 - 12076.940: 96.3321% ( 7) 00:30:45.330 12076.940 - 12134.176: 96.3784% ( 7) 00:30:45.330 12134.176 - 12191.413: 96.4182% ( 6) 00:30:45.330 12191.413 - 12248.650: 96.4645% ( 7) 00:30:45.330 12248.650 - 12305.886: 96.5109% ( 7) 00:30:45.330 12305.886 - 12363.123: 96.5638% ( 8) 00:30:45.330 12363.123 - 12420.360: 96.6035% ( 6) 00:30:45.330 12420.360 - 12477.597: 96.6499% ( 7) 00:30:45.330 12477.597 - 12534.833: 96.6764% ( 4) 00:30:45.330 12534.833 - 12592.070: 96.6896% ( 2) 00:30:45.330 12592.070 - 12649.307: 96.6962% ( 1) 00:30:45.330 12649.307 - 12706.543: 96.7095% ( 2) 00:30:45.330 12706.543 - 12763.780: 96.7492% ( 6) 00:30:45.330 12763.780 - 12821.017: 96.7823% ( 5) 00:30:45.330 12821.017 - 12878.253: 96.8154% ( 5) 00:30:45.330 12878.253 - 12935.490: 96.8485% ( 5) 00:30:45.330 12935.490 - 12992.727: 96.8750% ( 4) 00:30:45.330 12992.727 - 13049.963: 96.9081% ( 5) 00:30:45.330 13049.963 - 13107.200: 96.9412% ( 5) 00:30:45.330 13107.200 - 13164.437: 96.9677% ( 4) 00:30:45.330 13164.437 - 13221.673: 97.0008% ( 5) 00:30:45.330 13221.673 - 13278.910: 97.0339% ( 5) 00:30:45.330 13278.910 - 13336.147: 97.0670% ( 5) 00:30:45.330 13336.147 - 13393.383: 97.1001% ( 5) 00:30:45.330 13393.383 - 13450.620: 97.1398% ( 6) 00:30:45.330 13450.620 - 13507.857: 97.1663% ( 4) 00:30:45.330 13507.857 - 13565.093: 97.1994% ( 5) 00:30:45.330 13565.093 - 13622.330: 97.2325% ( 5) 00:30:45.330 13622.330 - 13679.567: 97.2722% ( 6) 00:30:45.330 13679.567 - 13736.803: 97.3517% ( 12) 00:30:45.330 13736.803 - 13794.040: 97.4311% ( 12) 00:30:45.330 13794.040 - 13851.277: 97.5040% ( 11) 00:30:45.330 13851.277 - 13908.514: 97.5834% ( 12) 00:30:45.330 13908.514 - 13965.750: 97.6562% ( 11) 
00:30:45.330 13965.750 - 14022.987: 97.7291% ( 11) 00:30:45.330 14022.987 - 14080.224: 97.7820% ( 8) 00:30:45.330 14080.224 - 14137.460: 97.8218% ( 6) 00:30:45.330 14137.460 - 14194.697: 97.8747% ( 8) 00:30:45.330 14194.697 - 14251.934: 97.9145% ( 6) 00:30:45.330 14251.934 - 14309.170: 97.9542% ( 6) 00:30:45.330 14309.170 - 14366.407: 98.0072% ( 8) 00:30:45.330 14366.407 - 14423.644: 98.0469% ( 6) 00:30:45.330 14423.644 - 14480.880: 98.0866% ( 6) 00:30:45.330 14480.880 - 14538.117: 98.1396% ( 8) 00:30:45.330 14538.117 - 14595.354: 98.1793% ( 6) 00:30:45.330 14595.354 - 14652.590: 98.2256% ( 7) 00:30:45.330 14652.590 - 14767.064: 98.2985% ( 11) 00:30:45.330 14767.064 - 14881.537: 98.3051% ( 1) 00:30:45.330 15339.431 - 15453.904: 98.3316% ( 4) 00:30:45.330 15453.904 - 15568.377: 98.3647% ( 5) 00:30:45.330 15568.377 - 15682.851: 98.4375% ( 11) 00:30:45.330 15682.851 - 15797.324: 98.5236% ( 13) 00:30:45.330 15797.324 - 15911.797: 98.6096% ( 13) 00:30:45.330 15911.797 - 16026.271: 98.6957% ( 13) 00:30:45.330 16026.271 - 16140.744: 98.7818% ( 13) 00:30:45.330 16140.744 - 16255.217: 98.8678% ( 13) 00:30:45.330 16255.217 - 16369.691: 98.9473% ( 12) 00:30:45.330 16369.691 - 16484.164: 99.0334% ( 13) 00:30:45.330 16484.164 - 16598.638: 99.1128% ( 12) 00:30:45.330 16598.638 - 16713.111: 99.1525% ( 6) 00:30:45.330 35028.849 - 35257.796: 99.1923% ( 6) 00:30:45.330 35257.796 - 35486.742: 99.2386% ( 7) 00:30:45.330 35486.742 - 35715.689: 99.2916% ( 8) 00:30:45.330 35715.689 - 35944.636: 99.3379% ( 7) 00:30:45.330 35944.636 - 36173.583: 99.3776% ( 6) 00:30:45.330 36173.583 - 36402.529: 99.4174% ( 6) 00:30:45.330 36402.529 - 36631.476: 99.4571% ( 6) 00:30:45.330 36631.476 - 36860.423: 99.4968% ( 6) 00:30:45.330 36860.423 - 37089.369: 99.5365% ( 6) 00:30:45.330 37089.369 - 37318.316: 99.5763% ( 6) 00:30:45.330 41210.410 - 41439.357: 99.5829% ( 1) 00:30:45.330 41439.357 - 41668.304: 99.6226% ( 6) 00:30:45.330 41668.304 - 41897.251: 99.6623% ( 6) 00:30:45.330 41897.251 - 42126.197: 99.6954% ( 5) 00:30:45.330 42126.197 - 42355.144: 99.7352% ( 6) 00:30:45.330 42355.144 - 42584.091: 99.7683% ( 5) 00:30:45.330 42584.091 - 42813.038: 99.8146% ( 7) 00:30:45.330 42813.038 - 43041.984: 99.8477% ( 5) 00:30:45.330 43041.984 - 43270.931: 99.8941% ( 7) 00:30:45.330 43270.931 - 43499.878: 99.9338% ( 6) 00:30:45.330 43499.878 - 43728.824: 99.9735% ( 6) 00:30:45.330 43728.824 - 43957.771: 100.0000% ( 4) 00:30:45.330 00:30:45.330 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:45.330 ============================================================================== 00:30:45.330 Range in us Cumulative IO count 00:30:45.330 6811.165 - 6839.783: 0.0132% ( 2) 00:30:45.330 6839.783 - 6868.402: 0.0331% ( 3) 00:30:45.330 6868.402 - 6897.020: 0.0927% ( 9) 00:30:45.330 6897.020 - 6925.638: 0.2516% ( 24) 00:30:45.330 6925.638 - 6954.257: 0.4502% ( 30) 00:30:45.330 6954.257 - 6982.875: 0.7680% ( 48) 00:30:45.330 6982.875 - 7011.493: 1.1719% ( 61) 00:30:45.330 7011.493 - 7040.112: 1.5890% ( 63) 00:30:45.330 7040.112 - 7068.730: 2.1319% ( 82) 00:30:45.330 7068.730 - 7097.348: 2.7410% ( 92) 00:30:45.330 7097.348 - 7125.967: 3.4163% ( 102) 00:30:45.330 7125.967 - 7154.585: 4.1049% ( 104) 00:30:45.330 7154.585 - 7183.203: 4.7868% ( 103) 00:30:45.330 7183.203 - 7211.822: 5.5151% ( 110) 00:30:45.330 7211.822 - 7240.440: 6.3493% ( 126) 00:30:45.330 7240.440 - 7269.059: 7.2630% ( 138) 00:30:45.331 7269.059 - 7297.677: 8.2892% ( 155) 00:30:45.331 7297.677 - 7326.295: 9.3353% ( 158) 00:30:45.331 7326.295 - 7383.532: 11.5665% ( 
337) 00:30:45.331 7383.532 - 7440.769: 14.1618% ( 392) 00:30:45.331 7440.769 - 7498.005: 17.3994% ( 489) 00:30:45.331 7498.005 - 7555.242: 20.7693% ( 509) 00:30:45.331 7555.242 - 7612.479: 24.6557% ( 587) 00:30:45.331 7612.479 - 7669.715: 28.9526% ( 649) 00:30:45.331 7669.715 - 7726.952: 33.4481% ( 679) 00:30:45.331 7726.952 - 7784.189: 38.2217% ( 721) 00:30:45.331 7784.189 - 7841.425: 42.8099% ( 693) 00:30:45.331 7841.425 - 7898.662: 47.3914% ( 692) 00:30:45.331 7898.662 - 7955.899: 51.6353% ( 641) 00:30:45.331 7955.899 - 8013.135: 55.8726% ( 640) 00:30:45.331 8013.135 - 8070.372: 59.8716% ( 604) 00:30:45.331 8070.372 - 8127.609: 63.5659% ( 558) 00:30:45.331 8127.609 - 8184.845: 66.8763% ( 500) 00:30:45.331 8184.845 - 8242.082: 70.0874% ( 485) 00:30:45.331 8242.082 - 8299.319: 73.1528% ( 463) 00:30:45.331 8299.319 - 8356.555: 76.0659% ( 440) 00:30:45.331 8356.555 - 8413.792: 78.7672% ( 408) 00:30:45.331 8413.792 - 8471.029: 81.2632% ( 377) 00:30:45.331 8471.029 - 8528.266: 83.2296% ( 297) 00:30:45.331 8528.266 - 8585.502: 84.8318% ( 242) 00:30:45.331 8585.502 - 8642.739: 86.1560% ( 200) 00:30:45.331 8642.739 - 8699.976: 87.1557% ( 151) 00:30:45.331 8699.976 - 8757.212: 87.8112% ( 99) 00:30:45.331 8757.212 - 8814.449: 88.2812% ( 71) 00:30:45.331 8814.449 - 8871.686: 88.6520% ( 56) 00:30:45.331 8871.686 - 8928.922: 89.0162% ( 55) 00:30:45.331 8928.922 - 8986.159: 89.3008% ( 43) 00:30:45.331 8986.159 - 9043.396: 89.5723% ( 41) 00:30:45.331 9043.396 - 9100.632: 89.8239% ( 38) 00:30:45.331 9100.632 - 9157.869: 90.1020% ( 42) 00:30:45.331 9157.869 - 9215.106: 90.4065% ( 46) 00:30:45.331 9215.106 - 9272.342: 90.7243% ( 48) 00:30:45.331 9272.342 - 9329.579: 91.0156% ( 44) 00:30:45.331 9329.579 - 9386.816: 91.2407% ( 34) 00:30:45.331 9386.816 - 9444.052: 91.4725% ( 35) 00:30:45.331 9444.052 - 9501.289: 91.6578% ( 28) 00:30:45.331 9501.289 - 9558.526: 91.9028% ( 37) 00:30:45.331 9558.526 - 9615.762: 92.1014% ( 30) 00:30:45.331 9615.762 - 9672.999: 92.3067% ( 31) 00:30:45.331 9672.999 - 9730.236: 92.4987% ( 29) 00:30:45.331 9730.236 - 9787.472: 92.6841% ( 28) 00:30:45.331 9787.472 - 9844.709: 92.8628% ( 27) 00:30:45.331 9844.709 - 9901.946: 93.0283% ( 25) 00:30:45.331 9901.946 - 9959.183: 93.2137% ( 28) 00:30:45.331 9959.183 - 10016.419: 93.4057% ( 29) 00:30:45.331 10016.419 - 10073.656: 93.5779% ( 26) 00:30:45.331 10073.656 - 10130.893: 93.7169% ( 21) 00:30:45.331 10130.893 - 10188.129: 93.9155% ( 30) 00:30:45.331 10188.129 - 10245.366: 94.0744% ( 24) 00:30:45.331 10245.366 - 10302.603: 94.2135% ( 21) 00:30:45.331 10302.603 - 10359.839: 94.3525% ( 21) 00:30:45.331 10359.839 - 10417.076: 94.5048% ( 23) 00:30:45.331 10417.076 - 10474.313: 94.6703% ( 25) 00:30:45.331 10474.313 - 10531.549: 94.8292% ( 24) 00:30:45.331 10531.549 - 10588.786: 94.9748% ( 22) 00:30:45.331 10588.786 - 10646.023: 95.1073% ( 20) 00:30:45.331 10646.023 - 10703.259: 95.2463% ( 21) 00:30:45.331 10703.259 - 10760.496: 95.3853% ( 21) 00:30:45.331 10760.496 - 10817.733: 95.5442% ( 24) 00:30:45.331 10817.733 - 10874.969: 95.6237% ( 12) 00:30:45.331 10874.969 - 10932.206: 95.7031% ( 12) 00:30:45.331 10932.206 - 10989.443: 95.7495% ( 7) 00:30:45.331 10989.443 - 11046.679: 95.7826% ( 5) 00:30:45.331 11046.679 - 11103.916: 95.8223% ( 6) 00:30:45.331 11103.916 - 11161.153: 95.8554% ( 5) 00:30:45.331 11161.153 - 11218.390: 95.8885% ( 5) 00:30:45.331 11218.390 - 11275.626: 95.9282% ( 6) 00:30:45.331 11275.626 - 11332.863: 95.9613% ( 5) 00:30:45.331 11332.863 - 11390.100: 96.0011% ( 6) 00:30:45.331 11390.100 - 11447.336: 96.0408% ( 6) 
00:30:45.331 11447.336 - 11504.573: 96.0805% ( 6) 00:30:45.331 11504.573 - 11561.810: 96.1004% ( 3) 00:30:45.331 11561.810 - 11619.046: 96.1136% ( 2) 00:30:45.331 11619.046 - 11676.283: 96.1335% ( 3) 00:30:45.331 11676.283 - 11733.520: 96.1533% ( 3) 00:30:45.331 11733.520 - 11790.756: 96.1666% ( 2) 00:30:45.331 11790.756 - 11847.993: 96.1864% ( 3) 00:30:45.331 12191.413 - 12248.650: 96.2129% ( 4) 00:30:45.331 12248.650 - 12305.886: 96.2394% ( 4) 00:30:45.331 12305.886 - 12363.123: 96.2659% ( 4) 00:30:45.331 12363.123 - 12420.360: 96.3189% ( 8) 00:30:45.331 12420.360 - 12477.597: 96.3718% ( 8) 00:30:45.331 12477.597 - 12534.833: 96.4049% ( 5) 00:30:45.331 12534.833 - 12592.070: 96.4513% ( 7) 00:30:45.331 12592.070 - 12649.307: 96.4910% ( 6) 00:30:45.331 12649.307 - 12706.543: 96.5373% ( 7) 00:30:45.331 12706.543 - 12763.780: 96.5837% ( 7) 00:30:45.331 12763.780 - 12821.017: 96.6234% ( 6) 00:30:45.331 12821.017 - 12878.253: 96.6698% ( 7) 00:30:45.331 12878.253 - 12935.490: 96.7161% ( 7) 00:30:45.331 12935.490 - 12992.727: 96.7624% ( 7) 00:30:45.331 12992.727 - 13049.963: 96.8220% ( 9) 00:30:45.331 13049.963 - 13107.200: 96.8816% ( 9) 00:30:45.331 13107.200 - 13164.437: 96.9280% ( 7) 00:30:45.331 13164.437 - 13221.673: 96.9611% ( 5) 00:30:45.331 13221.673 - 13278.910: 97.0207% ( 9) 00:30:45.331 13278.910 - 13336.147: 97.0802% ( 9) 00:30:45.331 13336.147 - 13393.383: 97.1266% ( 7) 00:30:45.331 13393.383 - 13450.620: 97.1862% ( 9) 00:30:45.331 13450.620 - 13507.857: 97.2325% ( 7) 00:30:45.331 13507.857 - 13565.093: 97.2855% ( 8) 00:30:45.331 13565.093 - 13622.330: 97.3517% ( 10) 00:30:45.331 13622.330 - 13679.567: 97.3848% ( 5) 00:30:45.331 13679.567 - 13736.803: 97.4179% ( 5) 00:30:45.331 13736.803 - 13794.040: 97.4576% ( 6) 00:30:45.331 13794.040 - 13851.277: 97.4841% ( 4) 00:30:45.331 13851.277 - 13908.514: 97.5172% ( 5) 00:30:45.331 13908.514 - 13965.750: 97.5569% ( 6) 00:30:45.331 13965.750 - 14022.987: 97.5900% ( 5) 00:30:45.331 14022.987 - 14080.224: 97.6298% ( 6) 00:30:45.331 14080.224 - 14137.460: 97.6695% ( 6) 00:30:45.331 14137.460 - 14194.697: 97.7092% ( 6) 00:30:45.331 14194.697 - 14251.934: 97.7556% ( 7) 00:30:45.331 14251.934 - 14309.170: 97.8151% ( 9) 00:30:45.331 14309.170 - 14366.407: 97.8483% ( 5) 00:30:45.331 14366.407 - 14423.644: 97.8814% ( 5) 00:30:45.331 14423.644 - 14480.880: 97.9145% ( 5) 00:30:45.331 14480.880 - 14538.117: 97.9608% ( 7) 00:30:45.331 14538.117 - 14595.354: 97.9873% ( 4) 00:30:45.331 14595.354 - 14652.590: 98.0204% ( 5) 00:30:45.331 14652.590 - 14767.064: 98.0932% ( 11) 00:30:45.331 14767.064 - 14881.537: 98.1396% ( 7) 00:30:45.331 14881.537 - 14996.010: 98.1793% ( 6) 00:30:45.331 14996.010 - 15110.484: 98.2654% ( 13) 00:30:45.331 15110.484 - 15224.957: 98.3514% ( 13) 00:30:45.331 15224.957 - 15339.431: 98.4243% ( 11) 00:30:45.331 15339.431 - 15453.904: 98.4574% ( 5) 00:30:45.331 15453.904 - 15568.377: 98.4971% ( 6) 00:30:45.331 15568.377 - 15682.851: 98.5368% ( 6) 00:30:45.331 15682.851 - 15797.324: 98.5832% ( 7) 00:30:45.331 15797.324 - 15911.797: 98.6229% ( 6) 00:30:45.331 15911.797 - 16026.271: 98.6626% ( 6) 00:30:45.331 16026.271 - 16140.744: 98.7023% ( 6) 00:30:45.331 16140.744 - 16255.217: 98.7421% ( 6) 00:30:45.331 16255.217 - 16369.691: 98.7884% ( 7) 00:30:45.331 16369.691 - 16484.164: 98.8281% ( 6) 00:30:45.331 16484.164 - 16598.638: 98.8745% ( 7) 00:30:45.331 16598.638 - 16713.111: 98.9142% ( 6) 00:30:45.331 16713.111 - 16827.584: 98.9605% ( 7) 00:30:45.331 16827.584 - 16942.058: 99.0069% ( 7) 00:30:45.331 16942.058 - 17056.531: 99.0532% ( 7) 
00:30:45.331 17056.531 - 17171.004: 99.0930% ( 6) 00:30:45.331 17171.004 - 17285.478: 99.1393% ( 7) 00:30:45.331 17285.478 - 17399.951: 99.1525% ( 2) 00:30:45.331 32510.435 - 32739.382: 99.1724% ( 3) 00:30:45.331 32739.382 - 32968.328: 99.2188% ( 7) 00:30:45.331 32968.328 - 33197.275: 99.2651% ( 7) 00:30:45.331 33197.275 - 33426.222: 99.3114% ( 7) 00:30:45.331 33426.222 - 33655.169: 99.3644% ( 8) 00:30:45.331 33655.169 - 33884.115: 99.4108% ( 7) 00:30:45.331 33884.115 - 34113.062: 99.4571% ( 7) 00:30:45.331 34113.062 - 34342.009: 99.4968% ( 6) 00:30:45.331 34342.009 - 34570.955: 99.5498% ( 8) 00:30:45.331 34570.955 - 34799.902: 99.5763% ( 4) 00:30:45.331 38920.943 - 39149.890: 99.6226% ( 7) 00:30:45.331 39149.890 - 39378.837: 99.6690% ( 7) 00:30:45.331 39378.837 - 39607.783: 99.7153% ( 7) 00:30:45.331 39607.783 - 39836.730: 99.7617% ( 7) 00:30:45.331 39836.730 - 40065.677: 99.8146% ( 8) 00:30:45.331 40065.677 - 40294.624: 99.8676% ( 8) 00:30:45.331 40294.624 - 40523.570: 99.9139% ( 7) 00:30:45.331 40523.570 - 40752.517: 99.9669% ( 8) 00:30:45.331 40752.517 - 40981.464: 100.0000% ( 5) 00:30:45.331 00:30:45.331 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:45.331 ============================================================================== 00:30:45.332 Range in us Cumulative IO count 00:30:45.332 6811.165 - 6839.783: 0.0199% ( 3) 00:30:45.332 6839.783 - 6868.402: 0.0331% ( 2) 00:30:45.332 6868.402 - 6897.020: 0.1059% ( 11) 00:30:45.332 6897.020 - 6925.638: 0.2913% ( 28) 00:30:45.332 6925.638 - 6954.257: 0.5230% ( 35) 00:30:45.332 6954.257 - 6982.875: 0.8541% ( 50) 00:30:45.332 6982.875 - 7011.493: 1.2447% ( 59) 00:30:45.332 7011.493 - 7040.112: 1.6552% ( 62) 00:30:45.332 7040.112 - 7068.730: 2.1981% ( 82) 00:30:45.332 7068.730 - 7097.348: 2.8006% ( 91) 00:30:45.332 7097.348 - 7125.967: 3.4958% ( 105) 00:30:45.332 7125.967 - 7154.585: 4.1512% ( 99) 00:30:45.332 7154.585 - 7183.203: 4.8265% ( 102) 00:30:45.332 7183.203 - 7211.822: 5.6475% ( 124) 00:30:45.332 7211.822 - 7240.440: 6.5016% ( 129) 00:30:45.332 7240.440 - 7269.059: 7.4020% ( 136) 00:30:45.332 7269.059 - 7297.677: 8.3355% ( 141) 00:30:45.332 7297.677 - 7326.295: 9.4015% ( 161) 00:30:45.332 7326.295 - 7383.532: 11.6856% ( 345) 00:30:45.332 7383.532 - 7440.769: 14.4134% ( 412) 00:30:45.332 7440.769 - 7498.005: 17.5119% ( 468) 00:30:45.332 7498.005 - 7555.242: 20.9282% ( 516) 00:30:45.332 7555.242 - 7612.479: 24.8279% ( 589) 00:30:45.332 7612.479 - 7669.715: 29.0651% ( 640) 00:30:45.332 7669.715 - 7726.952: 33.4547% ( 663) 00:30:45.332 7726.952 - 7784.189: 38.0694% ( 697) 00:30:45.332 7784.189 - 7841.425: 42.6708% ( 695) 00:30:45.332 7841.425 - 7898.662: 47.1597% ( 678) 00:30:45.332 7898.662 - 7955.899: 51.5360% ( 661) 00:30:45.332 7955.899 - 8013.135: 55.5945% ( 613) 00:30:45.332 8013.135 - 8070.372: 59.5802% ( 602) 00:30:45.332 8070.372 - 8127.609: 63.2680% ( 557) 00:30:45.332 8127.609 - 8184.845: 66.7439% ( 525) 00:30:45.332 8184.845 - 8242.082: 69.8424% ( 468) 00:30:45.332 8242.082 - 8299.319: 72.8814% ( 459) 00:30:45.332 8299.319 - 8356.555: 75.7084% ( 427) 00:30:45.332 8356.555 - 8413.792: 78.4627% ( 416) 00:30:45.332 8413.792 - 8471.029: 80.9719% ( 379) 00:30:45.332 8471.029 - 8528.266: 83.0641% ( 316) 00:30:45.332 8528.266 - 8585.502: 84.7391% ( 253) 00:30:45.332 8585.502 - 8642.739: 86.0832% ( 203) 00:30:45.332 8642.739 - 8699.976: 87.0961% ( 153) 00:30:45.332 8699.976 - 8757.212: 87.8774% ( 118) 00:30:45.332 8757.212 - 8814.449: 88.4335% ( 84) 00:30:45.332 8814.449 - 8871.686: 88.8904% ( 69) 
00:30:45.332 8871.686 - 8928.922: 89.2744% ( 58) 00:30:45.332 8928.922 - 8986.159: 89.5855% ( 47) 00:30:45.332 8986.159 - 9043.396: 89.8371% ( 38) 00:30:45.332 9043.396 - 9100.632: 90.0953% ( 39) 00:30:45.332 9100.632 - 9157.869: 90.3734% ( 42) 00:30:45.332 9157.869 - 9215.106: 90.6184% ( 37) 00:30:45.332 9215.106 - 9272.342: 90.9097% ( 44) 00:30:45.332 9272.342 - 9329.579: 91.1216% ( 32) 00:30:45.332 9329.579 - 9386.816: 91.3930% ( 41) 00:30:45.332 9386.816 - 9444.052: 91.6314% ( 36) 00:30:45.332 9444.052 - 9501.289: 91.8498% ( 33) 00:30:45.332 9501.289 - 9558.526: 92.0418% ( 29) 00:30:45.332 9558.526 - 9615.762: 92.1941% ( 23) 00:30:45.332 9615.762 - 9672.999: 92.3596% ( 25) 00:30:45.332 9672.999 - 9730.236: 92.5053% ( 22) 00:30:45.332 9730.236 - 9787.472: 92.6708% ( 25) 00:30:45.332 9787.472 - 9844.709: 92.8363% ( 25) 00:30:45.332 9844.709 - 9901.946: 92.9952% ( 24) 00:30:45.332 9901.946 - 9959.183: 93.1475% ( 23) 00:30:45.332 9959.183 - 10016.419: 93.3329% ( 28) 00:30:45.332 10016.419 - 10073.656: 93.5183% ( 28) 00:30:45.332 10073.656 - 10130.893: 93.6904% ( 26) 00:30:45.332 10130.893 - 10188.129: 93.8030% ( 17) 00:30:45.332 10188.129 - 10245.366: 93.9288% ( 19) 00:30:45.332 10245.366 - 10302.603: 94.0612% ( 20) 00:30:45.332 10302.603 - 10359.839: 94.1870% ( 19) 00:30:45.332 10359.839 - 10417.076: 94.3326% ( 22) 00:30:45.332 10417.076 - 10474.313: 94.4518% ( 18) 00:30:45.332 10474.313 - 10531.549: 94.5710% ( 18) 00:30:45.332 10531.549 - 10588.786: 94.6901% ( 18) 00:30:45.332 10588.786 - 10646.023: 94.8027% ( 17) 00:30:45.332 10646.023 - 10703.259: 94.9086% ( 16) 00:30:45.332 10703.259 - 10760.496: 95.0079% ( 15) 00:30:45.332 10760.496 - 10817.733: 95.1337% ( 19) 00:30:45.332 10817.733 - 10874.969: 95.2397% ( 16) 00:30:45.332 10874.969 - 10932.206: 95.3059% ( 10) 00:30:45.332 10932.206 - 10989.443: 95.3853% ( 12) 00:30:45.332 10989.443 - 11046.679: 95.4582% ( 11) 00:30:45.332 11046.679 - 11103.916: 95.5310% ( 11) 00:30:45.332 11103.916 - 11161.153: 95.6038% ( 11) 00:30:45.332 11161.153 - 11218.390: 95.6833% ( 12) 00:30:45.332 11218.390 - 11275.626: 95.7627% ( 12) 00:30:45.332 11275.626 - 11332.863: 95.8289% ( 10) 00:30:45.332 11332.863 - 11390.100: 95.9084% ( 12) 00:30:45.332 11390.100 - 11447.336: 95.9680% ( 9) 00:30:45.332 11447.336 - 11504.573: 96.0209% ( 8) 00:30:45.332 11504.573 - 11561.810: 96.0805% ( 9) 00:30:45.332 11561.810 - 11619.046: 96.1136% ( 5) 00:30:45.332 11619.046 - 11676.283: 96.1401% ( 4) 00:30:45.332 11676.283 - 11733.520: 96.1600% ( 3) 00:30:45.332 11733.520 - 11790.756: 96.1864% ( 4) 00:30:45.332 12076.940 - 12134.176: 96.2063% ( 3) 00:30:45.332 12134.176 - 12191.413: 96.2195% ( 2) 00:30:45.332 12191.413 - 12248.650: 96.2460% ( 4) 00:30:45.332 12248.650 - 12305.886: 96.2659% ( 3) 00:30:45.332 12305.886 - 12363.123: 96.2791% ( 2) 00:30:45.332 12363.123 - 12420.360: 96.2990% ( 3) 00:30:45.332 12420.360 - 12477.597: 96.3189% ( 3) 00:30:45.332 12477.597 - 12534.833: 96.3387% ( 3) 00:30:45.332 12534.833 - 12592.070: 96.3520% ( 2) 00:30:45.332 12592.070 - 12649.307: 96.3718% ( 3) 00:30:45.332 12649.307 - 12706.543: 96.3851% ( 2) 00:30:45.332 12706.543 - 12763.780: 96.4049% ( 3) 00:30:45.332 12763.780 - 12821.017: 96.4314% ( 4) 00:30:45.332 12821.017 - 12878.253: 96.4778% ( 7) 00:30:45.332 12878.253 - 12935.490: 96.5572% ( 12) 00:30:45.332 12935.490 - 12992.727: 96.6168% ( 9) 00:30:45.332 12992.727 - 13049.963: 96.6896% ( 11) 00:30:45.332 13049.963 - 13107.200: 96.7492% ( 9) 00:30:45.332 13107.200 - 13164.437: 96.8287% ( 12) 00:30:45.332 13164.437 - 13221.673: 96.8949% ( 
10) 00:30:45.332 13221.673 - 13278.910: 96.9544% ( 9) 00:30:45.332 13278.910 - 13336.147: 97.0207% ( 10) 00:30:45.332 13336.147 - 13393.383: 97.0802% ( 9) 00:30:45.332 13393.383 - 13450.620: 97.1266% ( 7) 00:30:45.332 13450.620 - 13507.857: 97.1796% ( 8) 00:30:45.332 13507.857 - 13565.093: 97.2193% ( 6) 00:30:45.332 13565.093 - 13622.330: 97.2656% ( 7) 00:30:45.332 13622.330 - 13679.567: 97.3186% ( 8) 00:30:45.332 13679.567 - 13736.803: 97.3782% ( 9) 00:30:45.332 13736.803 - 13794.040: 97.4245% ( 7) 00:30:45.332 13794.040 - 13851.277: 97.4642% ( 6) 00:30:45.332 13851.277 - 13908.514: 97.5040% ( 6) 00:30:45.332 13908.514 - 13965.750: 97.5238% ( 3) 00:30:45.332 13965.750 - 14022.987: 97.5371% ( 2) 00:30:45.332 14022.987 - 14080.224: 97.5503% ( 2) 00:30:45.332 14080.224 - 14137.460: 97.5636% ( 2) 00:30:45.332 14137.460 - 14194.697: 97.5768% ( 2) 00:30:45.332 14194.697 - 14251.934: 97.5900% ( 2) 00:30:45.332 14251.934 - 14309.170: 97.6033% ( 2) 00:30:45.332 14309.170 - 14366.407: 97.6165% ( 2) 00:30:45.332 14366.407 - 14423.644: 97.6364% ( 3) 00:30:45.332 14423.644 - 14480.880: 97.6761% ( 6) 00:30:45.332 14480.880 - 14538.117: 97.7092% ( 5) 00:30:45.332 14538.117 - 14595.354: 97.7423% ( 5) 00:30:45.332 14595.354 - 14652.590: 97.7688% ( 4) 00:30:45.332 14652.590 - 14767.064: 97.8350% ( 10) 00:30:45.332 14767.064 - 14881.537: 97.9277% ( 14) 00:30:45.332 14881.537 - 14996.010: 98.0403% ( 17) 00:30:45.332 14996.010 - 15110.484: 98.1528% ( 17) 00:30:45.332 15110.484 - 15224.957: 98.2654% ( 17) 00:30:45.332 15224.957 - 15339.431: 98.3713% ( 16) 00:30:45.332 15339.431 - 15453.904: 98.4838% ( 17) 00:30:45.332 15453.904 - 15568.377: 98.5699% ( 13) 00:30:45.332 15568.377 - 15682.851: 98.6494% ( 12) 00:30:45.332 15682.851 - 15797.324: 98.6957% ( 7) 00:30:45.332 15797.324 - 15911.797: 98.7288% ( 5) 00:30:45.332 16026.271 - 16140.744: 98.7487% ( 3) 00:30:45.332 16140.744 - 16255.217: 98.7884% ( 6) 00:30:45.332 16255.217 - 16369.691: 98.8347% ( 7) 00:30:45.332 16369.691 - 16484.164: 98.8811% ( 7) 00:30:45.332 16484.164 - 16598.638: 98.9208% ( 6) 00:30:45.332 16598.638 - 16713.111: 98.9672% ( 7) 00:30:45.332 16713.111 - 16827.584: 99.0135% ( 7) 00:30:45.332 16827.584 - 16942.058: 99.0532% ( 6) 00:30:45.332 16942.058 - 17056.531: 99.0996% ( 7) 00:30:45.332 17056.531 - 17171.004: 99.1459% ( 7) 00:30:45.332 17171.004 - 17285.478: 99.1525% ( 1) 00:30:45.332 29992.021 - 30220.968: 99.1856% ( 5) 00:30:45.332 30220.968 - 30449.914: 99.2386% ( 8) 00:30:45.332 30449.914 - 30678.861: 99.2783% ( 6) 00:30:45.332 30678.861 - 30907.808: 99.3247% ( 7) 00:30:45.332 30907.808 - 31136.755: 99.3776% ( 8) 00:30:45.332 31136.755 - 31365.701: 99.4240% ( 7) 00:30:45.332 31365.701 - 31594.648: 99.4703% ( 7) 00:30:45.332 31594.648 - 31823.595: 99.5167% ( 7) 00:30:45.332 31823.595 - 32052.541: 99.5432% ( 4) 00:30:45.332 32052.541 - 32281.488: 99.5763% ( 5) 00:30:45.332 36402.529 - 36631.476: 99.5961% ( 3) 00:30:45.332 36631.476 - 36860.423: 99.6425% ( 7) 00:30:45.332 36860.423 - 37089.369: 99.6888% ( 7) 00:30:45.332 37089.369 - 37318.316: 99.7418% ( 8) 00:30:45.332 37318.316 - 37547.263: 99.7881% ( 7) 00:30:45.333 37547.263 - 37776.210: 99.8345% ( 7) 00:30:45.333 37776.210 - 38005.156: 99.8874% ( 8) 00:30:45.333 38005.156 - 38234.103: 99.9338% ( 7) 00:30:45.333 38234.103 - 38463.050: 99.9801% ( 7) 00:30:45.333 38463.050 - 38691.997: 100.0000% ( 3) 00:30:45.333 00:30:45.333 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:45.333 ============================================================================== 
00:30:45.333 Range in us Cumulative IO count 00:30:45.333 6811.165 - 6839.783: 0.0132% ( 2) 00:30:45.333 6839.783 - 6868.402: 0.0461% ( 5) 00:30:45.333 6868.402 - 6897.020: 0.1648% ( 18) 00:30:45.333 6897.020 - 6925.638: 0.3033% ( 21) 00:30:45.333 6925.638 - 6954.257: 0.5999% ( 45) 00:30:45.333 6954.257 - 6982.875: 0.9098% ( 47) 00:30:45.333 6982.875 - 7011.493: 1.3054% ( 60) 00:30:45.333 7011.493 - 7040.112: 1.7405% ( 66) 00:30:45.333 7040.112 - 7068.730: 2.2350% ( 75) 00:30:45.333 7068.730 - 7097.348: 2.7822% ( 83) 00:30:45.333 7097.348 - 7125.967: 3.4546% ( 102) 00:30:45.333 7125.967 - 7154.585: 4.1205% ( 101) 00:30:45.333 7154.585 - 7183.203: 4.8589% ( 112) 00:30:45.333 7183.203 - 7211.822: 5.6237% ( 116) 00:30:45.333 7211.822 - 7240.440: 6.4544% ( 126) 00:30:45.333 7240.440 - 7269.059: 7.3642% ( 138) 00:30:45.333 7269.059 - 7297.677: 8.3267% ( 146) 00:30:45.333 7297.677 - 7326.295: 9.3223% ( 151) 00:30:45.333 7326.295 - 7383.532: 11.4781% ( 327) 00:30:45.333 7383.532 - 7440.769: 14.0823% ( 395) 00:30:45.333 7440.769 - 7498.005: 17.2534% ( 481) 00:30:45.333 7498.005 - 7555.242: 20.7410% ( 529) 00:30:45.333 7555.242 - 7612.479: 24.4989% ( 570) 00:30:45.333 7612.479 - 7669.715: 28.6392% ( 628) 00:30:45.333 7669.715 - 7726.952: 33.1224% ( 680) 00:30:45.333 7726.952 - 7784.189: 37.6912% ( 693) 00:30:45.333 7784.189 - 7841.425: 42.2600% ( 693) 00:30:45.333 7841.425 - 7898.662: 46.7893% ( 687) 00:30:45.333 7898.662 - 7955.899: 51.2724% ( 680) 00:30:45.333 7955.899 - 8013.135: 55.3336% ( 616) 00:30:45.333 8013.135 - 8070.372: 59.2234% ( 590) 00:30:45.333 8070.372 - 8127.609: 62.9088% ( 559) 00:30:45.333 8127.609 - 8184.845: 66.2711% ( 510) 00:30:45.333 8184.845 - 8242.082: 69.5280% ( 494) 00:30:45.333 8242.082 - 8299.319: 72.4552% ( 444) 00:30:45.333 8299.319 - 8356.555: 75.2571% ( 425) 00:30:45.333 8356.555 - 8413.792: 77.8877% ( 399) 00:30:45.333 8413.792 - 8471.029: 80.4193% ( 384) 00:30:45.333 8471.029 - 8528.266: 82.4631% ( 310) 00:30:45.333 8528.266 - 8585.502: 84.1443% ( 255) 00:30:45.333 8585.502 - 8642.739: 85.4233% ( 194) 00:30:45.333 8642.739 - 8699.976: 86.4781% ( 160) 00:30:45.333 8699.976 - 8757.212: 87.3088% ( 126) 00:30:45.333 8757.212 - 8814.449: 87.9088% ( 91) 00:30:45.333 8814.449 - 8871.686: 88.3637% ( 69) 00:30:45.333 8871.686 - 8928.922: 88.7197% ( 54) 00:30:45.333 8928.922 - 8986.159: 89.0493% ( 50) 00:30:45.333 8986.159 - 9043.396: 89.3460% ( 45) 00:30:45.333 9043.396 - 9100.632: 89.6756% ( 50) 00:30:45.333 9100.632 - 9157.869: 90.0185% ( 52) 00:30:45.333 9157.869 - 9215.106: 90.3547% ( 51) 00:30:45.333 9215.106 - 9272.342: 90.6975% ( 52) 00:30:45.333 9272.342 - 9329.579: 90.9942% ( 45) 00:30:45.333 9329.579 - 9386.816: 91.2843% ( 44) 00:30:45.333 9386.816 - 9444.052: 91.5810% ( 45) 00:30:45.333 9444.052 - 9501.289: 91.8315% ( 38) 00:30:45.333 9501.289 - 9558.526: 92.0754% ( 37) 00:30:45.333 9558.526 - 9615.762: 92.3062% ( 35) 00:30:45.333 9615.762 - 9672.999: 92.5171% ( 32) 00:30:45.333 9672.999 - 9730.236: 92.7413% ( 34) 00:30:45.333 9730.236 - 9787.472: 92.9523% ( 32) 00:30:45.333 9787.472 - 9844.709: 93.1632% ( 32) 00:30:45.333 9844.709 - 9901.946: 93.3610% ( 30) 00:30:45.333 9901.946 - 9959.183: 93.5852% ( 34) 00:30:45.333 9959.183 - 10016.419: 93.7500% ( 25) 00:30:45.333 10016.419 - 10073.656: 93.8950% ( 22) 00:30:45.333 10073.656 - 10130.893: 94.0137% ( 18) 00:30:45.333 10130.893 - 10188.129: 94.1456% ( 20) 00:30:45.333 10188.129 - 10245.366: 94.2708% ( 19) 00:30:45.333 10245.366 - 10302.603: 94.3631% ( 14) 00:30:45.333 10302.603 - 10359.839: 94.4620% ( 15) 
00:30:45.333 10359.839 - 10417.076: 94.5609% ( 15) 00:30:45.333 10417.076 - 10474.313: 94.6400% ( 12) 00:30:45.333 10474.313 - 10531.549: 94.7191% ( 12) 00:30:45.333 10531.549 - 10588.786: 94.7917% ( 11) 00:30:45.333 10588.786 - 10646.023: 94.8576% ( 10) 00:30:45.333 10646.023 - 10703.259: 94.9433% ( 13) 00:30:45.333 10703.259 - 10760.496: 95.0356% ( 14) 00:30:45.333 10760.496 - 10817.733: 95.1015% ( 10) 00:30:45.333 10817.733 - 10874.969: 95.1477% ( 7) 00:30:45.333 10874.969 - 10932.206: 95.1938% ( 7) 00:30:45.333 10932.206 - 10989.443: 95.2400% ( 7) 00:30:45.333 10989.443 - 11046.679: 95.2927% ( 8) 00:30:45.333 11046.679 - 11103.916: 95.3455% ( 8) 00:30:45.333 11103.916 - 11161.153: 95.3850% ( 6) 00:30:45.333 11161.153 - 11218.390: 95.4378% ( 8) 00:30:45.333 11218.390 - 11275.626: 95.4707% ( 5) 00:30:45.333 11275.626 - 11332.863: 95.4971% ( 4) 00:30:45.333 11332.863 - 11390.100: 95.5432% ( 7) 00:30:45.333 11390.100 - 11447.336: 95.5960% ( 8) 00:30:45.333 11447.336 - 11504.573: 95.6290% ( 5) 00:30:45.333 11504.573 - 11561.810: 95.6685% ( 6) 00:30:45.333 11561.810 - 11619.046: 95.7015% ( 5) 00:30:45.333 11619.046 - 11676.283: 95.7674% ( 10) 00:30:45.333 11676.283 - 11733.520: 95.8136% ( 7) 00:30:45.333 11733.520 - 11790.756: 95.8663% ( 8) 00:30:45.333 11790.756 - 11847.993: 95.9190% ( 8) 00:30:45.333 11847.993 - 11905.230: 95.9850% ( 10) 00:30:45.333 11905.230 - 11962.466: 96.0443% ( 9) 00:30:45.333 11962.466 - 12019.703: 96.0970% ( 8) 00:30:45.333 12019.703 - 12076.940: 96.1498% ( 8) 00:30:45.333 12076.940 - 12134.176: 96.2091% ( 9) 00:30:45.333 12134.176 - 12191.413: 96.2619% ( 8) 00:30:45.333 12191.413 - 12248.650: 96.3212% ( 9) 00:30:45.333 12248.650 - 12305.886: 96.3805% ( 9) 00:30:45.333 12305.886 - 12363.123: 96.4333% ( 8) 00:30:45.333 12363.123 - 12420.360: 96.4794% ( 7) 00:30:45.333 12420.360 - 12477.597: 96.5124% ( 5) 00:30:45.333 12477.597 - 12534.833: 96.5520% ( 6) 00:30:45.333 12534.833 - 12592.070: 96.5915% ( 6) 00:30:45.333 12592.070 - 12649.307: 96.6377% ( 7) 00:30:45.333 12649.307 - 12706.543: 96.6772% ( 6) 00:30:45.333 12706.543 - 12763.780: 96.7168% ( 6) 00:30:45.333 12763.780 - 12821.017: 96.7497% ( 5) 00:30:45.333 12821.017 - 12878.253: 96.7893% ( 6) 00:30:45.333 12878.253 - 12935.490: 96.8091% ( 3) 00:30:45.333 12935.490 - 12992.727: 96.8354% ( 4) 00:30:45.333 12992.727 - 13049.963: 96.8486% ( 2) 00:30:45.333 13049.963 - 13107.200: 96.8684% ( 3) 00:30:45.333 13107.200 - 13164.437: 96.8948% ( 4) 00:30:45.333 13164.437 - 13221.673: 96.9146% ( 3) 00:30:45.333 13221.673 - 13278.910: 96.9343% ( 3) 00:30:45.333 13278.910 - 13336.147: 96.9607% ( 4) 00:30:45.333 13336.147 - 13393.383: 96.9805% ( 3) 00:30:45.333 13393.383 - 13450.620: 97.0134% ( 5) 00:30:45.333 13450.620 - 13507.857: 97.0728% ( 9) 00:30:45.333 13507.857 - 13565.093: 97.1057% ( 5) 00:30:45.333 13565.093 - 13622.330: 97.1453% ( 6) 00:30:45.333 13622.330 - 13679.567: 97.1651% ( 3) 00:30:45.333 13679.567 - 13736.803: 97.1915% ( 4) 00:30:45.333 13736.803 - 13794.040: 97.2178% ( 4) 00:30:45.333 13794.040 - 13851.277: 97.2442% ( 4) 00:30:45.333 13851.277 - 13908.514: 97.2706% ( 4) 00:30:45.333 13908.514 - 13965.750: 97.2969% ( 4) 00:30:45.333 13965.750 - 14022.987: 97.3233% ( 4) 00:30:45.333 14022.987 - 14080.224: 97.3431% ( 3) 00:30:45.333 14080.224 - 14137.460: 97.3826% ( 6) 00:30:45.333 14137.460 - 14194.697: 97.4288% ( 7) 00:30:45.333 14194.697 - 14251.934: 97.4684% ( 6) 00:30:45.333 14251.934 - 14309.170: 97.5145% ( 7) 00:30:45.333 14309.170 - 14366.407: 97.5541% ( 6) 00:30:45.333 14366.407 - 14423.644: 97.5804% 
( 4) 00:30:45.333 14423.644 - 14480.880: 97.6068% ( 4) 00:30:45.333 14480.880 - 14538.117: 97.6398% ( 5) 00:30:45.333 14538.117 - 14595.354: 97.6727% ( 5) 00:30:45.333 14595.354 - 14652.590: 97.7189% ( 7) 00:30:45.333 14652.590 - 14767.064: 97.7848% ( 10) 00:30:45.333 14767.064 - 14881.537: 97.8441% ( 9) 00:30:45.333 14881.537 - 14996.010: 97.9167% ( 11) 00:30:45.333 14996.010 - 15110.484: 97.9760% ( 9) 00:30:45.333 15110.484 - 15224.957: 98.0419% ( 10) 00:30:45.333 15224.957 - 15339.431: 98.1013% ( 9) 00:30:45.333 15339.431 - 15453.904: 98.1474% ( 7) 00:30:45.333 15453.904 - 15568.377: 98.2133% ( 10) 00:30:45.333 15568.377 - 15682.851: 98.2991% ( 13) 00:30:45.333 15682.851 - 15797.324: 98.4177% ( 18) 00:30:45.333 15797.324 - 15911.797: 98.5298% ( 17) 00:30:45.333 15911.797 - 16026.271: 98.6419% ( 17) 00:30:45.333 16026.271 - 16140.744: 98.7605% ( 18) 00:30:45.333 16140.744 - 16255.217: 98.8528% ( 14) 00:30:45.333 16255.217 - 16369.691: 98.9386% ( 13) 00:30:45.333 16369.691 - 16484.164: 99.0309% ( 14) 00:30:45.333 16484.164 - 16598.638: 99.1100% ( 12) 00:30:45.333 16598.638 - 16713.111: 99.1495% ( 6) 00:30:45.333 16713.111 - 16827.584: 99.1561% ( 1) 00:30:45.333 22780.199 - 22894.672: 99.1693% ( 2) 00:30:45.333 22894.672 - 23009.146: 99.1891% ( 3) 00:30:45.333 23009.146 - 23123.619: 99.2155% ( 4) 00:30:45.333 23123.619 - 23238.093: 99.2352% ( 3) 00:30:45.333 23238.093 - 23352.566: 99.2550% ( 3) 00:30:45.333 23352.566 - 23467.039: 99.2814% ( 4) 00:30:45.333 23467.039 - 23581.513: 99.3012% ( 3) 00:30:45.333 23581.513 - 23695.986: 99.3209% ( 3) 00:30:45.333 23695.986 - 23810.459: 99.3473% ( 4) 00:30:45.333 23810.459 - 23924.933: 99.3737% ( 4) 00:30:45.333 23924.933 - 24039.406: 99.3935% ( 3) 00:30:45.333 24039.406 - 24153.879: 99.4198% ( 4) 00:30:45.333 24153.879 - 24268.353: 99.4396% ( 3) 00:30:45.333 24268.353 - 24382.826: 99.4660% ( 4) 00:30:45.333 24382.826 - 24497.300: 99.4858% ( 3) 00:30:45.334 24497.300 - 24611.773: 99.5121% ( 4) 00:30:45.334 24611.773 - 24726.246: 99.5319% ( 3) 00:30:45.334 24726.246 - 24840.720: 99.5583% ( 4) 00:30:45.334 24840.720 - 24955.193: 99.5781% ( 3) 00:30:45.334 29076.234 - 29190.707: 99.5847% ( 1) 00:30:45.334 29190.707 - 29305.181: 99.6044% ( 3) 00:30:45.334 29305.181 - 29534.128: 99.6506% ( 7) 00:30:45.334 29534.128 - 29763.074: 99.6967% ( 7) 00:30:45.334 29763.074 - 29992.021: 99.7429% ( 7) 00:30:45.334 29992.021 - 30220.968: 99.7890% ( 7) 00:30:45.334 30220.968 - 30449.914: 99.8418% ( 8) 00:30:45.334 30449.914 - 30678.861: 99.8879% ( 7) 00:30:45.334 30678.861 - 30907.808: 99.9407% ( 8) 00:30:45.334 30907.808 - 31136.755: 99.9868% ( 7) 00:30:45.334 31136.755 - 31365.701: 100.0000% ( 2) 00:30:45.334 00:30:45.334 17:27:22 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:46.712 Initializing NVMe Controllers 00:30:46.712 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:46.712 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:46.712 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:46.712 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:46.712 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:46.712 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:46.712 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:46.712 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:46.712 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:46.712 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 
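The spdk_nvme_perf invocation echoed above drives everything that follows. A minimal sketch of the same command with each flag glossed; the glosses follow spdk_nvme_perf's usage text and are assumptions of this note, not anything stated in the log itself:

    # Same binary and flags as the run above; the comments are assumed meanings.
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    args=(
        -q 128      # queue depth: keep 128 I/Os outstanding per namespace
        -w write    # I/O pattern: 100% writes
        -o 12288    # I/O size in bytes (12 KiB)
        -t 1        # run time in seconds
        -LL         # -L enables latency tracking; given twice it requests the detailed per-bucket histograms
        -i 0        # shared memory group ID, shared with the job's other SPDK processes
    )
    "$perf" "${args[@]}"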
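Each "Latency histogram" block in this output lists one bucket per line in the form "start - end: cumulative% ( count )", with the bucket bounds in microseconds and the percentage cumulative over all completed I/Os, so a percentile can be read off as the first bucket whose cumulative column reaches it. A throwaway sketch, assuming the console output has been saved one entry per line to a file (perf.log here is a hypothetical name, not produced by this job):

    # Print the upper bound, in microseconds, of the first bucket whose cumulative
    # percentage reaches 99 in the first histogram of the saved log (a rough p99).
    awk '/Range in us/ { inhist = 1; next }
         inhist && $3 == "-" && $5 + 0 >= 99 { sub(":", "", $4); print $4 " us"; exit }' perf.log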
00:30:46.712 Initialization complete. Launching workers.
00:30:46.712 ========================================================
00:30:46.712 Latency(us)
00:30:46.712 Device Information : IOPS MiB/s Average min max
00:30:46.712 PCIE (0000:00:10.0) NSID 1 from core 0: 5791.58 67.87 22251.23 12675.88 50655.36
00:30:46.712 PCIE (0000:00:11.0) NSID 1 from core 0: 5791.58 67.87 22228.13 12995.75 48747.06
00:30:46.712 PCIE (0000:00:13.0) NSID 1 from core 0: 5791.58 67.87 22206.12 12738.29 46933.07
00:30:46.712 PCIE (0000:00:12.0) NSID 1 from core 0: 5791.58 67.87 22180.83 13181.65 45177.22
00:30:46.712 PCIE (0000:00:12.0) NSID 2 from core 0: 5791.58 67.87 22149.85 13359.54 43418.23
00:30:46.712 PCIE (0000:00:12.0) NSID 3 from core 0: 5855.22 68.62 21873.75 13131.50 32228.85
00:30:46.712 ========================================================
00:30:46.712 Total : 34813.12 407.97 22147.82 12675.88 50655.36
00:30:46.712
00:30:46.712 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:46.712 =================================================================================
00:30:46.712 1.00000% : 13393.383us
00:30:46.712 10.00000% : 15110.484us
00:30:46.712 25.00000% : 17972.318us
00:30:46.712 50.00000% : 23467.039us
00:30:46.712 75.00000% : 25413.086us
00:30:46.712 90.00000% : 26672.293us
00:30:46.712 95.00000% : 27130.187us
00:30:46.712 98.00000% : 36402.529us
00:30:46.712 99.00000% : 47620.919us
00:30:46.712 99.50000% : 49223.546us
00:30:46.712 99.90000% : 50368.279us
00:30:46.712 99.99000% : 50826.173us
00:30:46.712 99.99900% : 50826.173us
00:30:46.712 99.99990% : 50826.173us
00:30:46.712 99.99999% : 50826.173us
00:30:46.712
00:30:46.712 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:30:46.712 =================================================================================
00:30:46.712 1.00000% : 13450.620us
00:30:46.712 10.00000% : 15110.484us
00:30:46.712 25.00000% : 18201.265us
00:30:46.712 50.00000% : 23924.933us
00:30:46.712 75.00000% : 25069.666us
00:30:46.712 90.00000% : 26099.927us
00:30:46.712 95.00000% : 26672.293us
00:30:46.712 98.00000% : 34342.009us
00:30:46.712 99.00000% : 46018.292us
00:30:46.712 99.50000% : 47391.972us
00:30:46.712 99.90000% : 48536.706us
00:30:46.712 99.99000% : 48765.652us
00:30:46.712 99.99900% : 48765.652us
00:30:46.712 99.99990% : 48765.652us
00:30:46.712 99.99999% : 48765.652us
00:30:46.712
00:30:46.712 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:30:46.712 =================================================================================
00:30:46.712 1.00000% : 13393.383us
00:30:46.712 10.00000% : 15224.957us
00:30:46.712 25.00000% : 18430.211us
00:30:46.712 50.00000% : 23924.933us
00:30:46.712 75.00000% : 25069.666us
00:30:46.712 90.00000% : 26214.400us
00:30:46.712 95.00000% : 26672.293us
00:30:46.712 98.00000% : 32968.328us
00:30:46.712 99.00000% : 44186.718us
00:30:46.712 99.50000% : 45789.345us
00:30:46.712 99.90000% : 46705.132us
00:30:46.712 99.99000% : 46934.079us
00:30:46.712 99.99900% : 46934.079us
00:30:46.712 99.99990% : 46934.079us
00:30:46.712 99.99999% : 46934.079us
00:30:46.712
00:30:46.712 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:30:46.712 =================================================================================
00:30:46.712 1.00000% : 13679.567us
00:30:46.712 10.00000% : 15224.957us
00:30:46.712 25.00000% : 17972.318us
00:30:46.712 50.00000% : 23924.933us
00:30:46.712 75.00000% : 25184.140us
00:30:46.712 90.00000% : 26214.400us
00:30:46.712
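The MiB/s column above is consistent with the IOPS column times the 12288-byte I/O size (1 MiB = 1048576 bytes), which gives a quick sanity check on the table:

    # Throughput implied by the IOPS column, in MiB/s:
    awk 'BEGIN { print  5791.58 * 12288 / 1048576 }'   # ~67.87, the per-device rows
    awk 'BEGIN { print 34813.12 * 12288 / 1048576 }'   # ~407.97, the Total row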
95.00000% : 26672.293us 00:30:46.712 98.00000% : 31365.701us 00:30:46.712 99.00000% : 42584.091us 00:30:46.712 99.50000% : 43957.771us 00:30:46.712 99.90000% : 45102.505us 00:30:46.712 99.99000% : 45331.452us 00:30:46.712 99.99900% : 45331.452us 00:30:46.712 99.99990% : 45331.452us 00:30:46.712 99.99999% : 45331.452us 00:30:46.712 00:30:46.712 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:46.712 ================================================================================= 00:30:46.712 1.00000% : 13679.567us 00:30:46.712 10.00000% : 14996.010us 00:30:46.712 25.00000% : 17857.845us 00:30:46.712 50.00000% : 24039.406us 00:30:46.712 75.00000% : 25184.140us 00:30:46.712 90.00000% : 26214.400us 00:30:46.712 95.00000% : 26672.293us 00:30:46.712 98.00000% : 29763.074us 00:30:46.712 99.00000% : 40752.517us 00:30:46.712 99.50000% : 42126.197us 00:30:46.712 99.90000% : 43270.931us 00:30:46.712 99.99000% : 43499.878us 00:30:46.712 99.99900% : 43499.878us 00:30:46.712 99.99990% : 43499.878us 00:30:46.712 99.99999% : 43499.878us 00:30:46.712 00:30:46.712 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:46.712 ================================================================================= 00:30:46.712 1.00000% : 13679.567us 00:30:46.712 10.00000% : 15110.484us 00:30:46.712 25.00000% : 17972.318us 00:30:46.712 50.00000% : 23810.459us 00:30:46.712 75.00000% : 25069.666us 00:30:46.712 90.00000% : 26099.927us 00:30:46.712 95.00000% : 26557.820us 00:30:46.712 98.00000% : 26901.240us 00:30:46.712 99.00000% : 29534.128us 00:30:46.712 99.50000% : 30907.808us 00:30:46.712 99.90000% : 32052.541us 00:30:46.712 99.99000% : 32281.488us 00:30:46.712 99.99900% : 32281.488us 00:30:46.712 99.99990% : 32281.488us 00:30:46.712 99.99999% : 32281.488us 00:30:46.712 00:30:46.712 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:46.712 ============================================================================== 00:30:46.712 Range in us Cumulative IO count 00:30:46.712 12649.307 - 12706.543: 0.0172% ( 1) 00:30:46.712 12706.543 - 12763.780: 0.0515% ( 2) 00:30:46.712 12763.780 - 12821.017: 0.0687% ( 1) 00:30:46.712 12821.017 - 12878.253: 0.1202% ( 3) 00:30:46.712 12878.253 - 12935.490: 0.1545% ( 2) 00:30:46.712 12935.490 - 12992.727: 0.2060% ( 3) 00:30:46.712 12992.727 - 13049.963: 0.2404% ( 2) 00:30:46.712 13049.963 - 13107.200: 0.4636% ( 13) 00:30:46.712 13107.200 - 13164.437: 0.5666% ( 6) 00:30:46.712 13164.437 - 13221.673: 0.7212% ( 9) 00:30:46.712 13221.673 - 13278.910: 0.8585% ( 8) 00:30:46.712 13278.910 - 13336.147: 0.9787% ( 7) 00:30:46.712 13336.147 - 13393.383: 1.1848% ( 12) 00:30:46.712 13393.383 - 13450.620: 1.3736% ( 11) 00:30:46.712 13450.620 - 13507.857: 1.5797% ( 12) 00:30:46.712 13507.857 - 13565.093: 1.8716% ( 17) 00:30:46.712 13565.093 - 13622.330: 2.2150% ( 20) 00:30:46.712 13622.330 - 13679.567: 2.3867% ( 10) 00:30:46.712 13679.567 - 13736.803: 2.5240% ( 8) 00:30:46.712 13736.803 - 13794.040: 2.7988% ( 16) 00:30:46.712 13794.040 - 13851.277: 2.9876% ( 11) 00:30:46.712 13851.277 - 13908.514: 3.1937% ( 12) 00:30:46.712 13908.514 - 13965.750: 3.3997% ( 12) 00:30:46.712 13965.750 - 14022.987: 3.5886% ( 11) 00:30:46.712 14022.987 - 14080.224: 3.7946% ( 12) 00:30:46.712 14080.224 - 14137.460: 4.0350% ( 14) 00:30:46.712 14137.460 - 14194.697: 4.3098% ( 16) 00:30:46.712 14194.697 - 14251.934: 4.6016% ( 17) 00:30:46.712 14251.934 - 14309.170: 4.8764% ( 16) 00:30:46.712 14309.170 - 14366.407: 5.1168% ( 14) 00:30:46.712 14366.407 - 14423.644: 
5.4087% ( 17) 00:30:46.712 14423.644 - 14480.880: 5.6834% ( 16) 00:30:46.712 14480.880 - 14538.117: 5.9409% ( 15) 00:30:46.712 14538.117 - 14595.354: 6.3874% ( 26) 00:30:46.712 14595.354 - 14652.590: 6.7651% ( 22) 00:30:46.712 14652.590 - 14767.064: 7.6065% ( 49) 00:30:46.712 14767.064 - 14881.537: 8.4306% ( 48) 00:30:46.712 14881.537 - 14996.010: 9.3063% ( 51) 00:30:46.712 14996.010 - 15110.484: 10.0103% ( 41) 00:30:46.712 15110.484 - 15224.957: 10.7315% ( 42) 00:30:46.712 15224.957 - 15339.431: 11.3324% ( 35) 00:30:46.712 15339.431 - 15453.904: 11.8819% ( 32) 00:30:46.712 15453.904 - 15568.377: 12.3283% ( 26) 00:30:46.712 15568.377 - 15682.851: 12.8262% ( 29) 00:30:46.712 15682.851 - 15797.324: 13.4444% ( 36) 00:30:46.712 15797.324 - 15911.797: 14.1655% ( 42) 00:30:46.712 15911.797 - 16026.271: 14.9382% ( 45) 00:30:46.712 16026.271 - 16140.744: 15.4361% ( 29) 00:30:46.712 16140.744 - 16255.217: 16.0371% ( 35) 00:30:46.712 16255.217 - 16369.691: 16.5865% ( 32) 00:30:46.712 16369.691 - 16484.164: 17.1703% ( 34) 00:30:46.712 16484.164 - 16598.638: 17.7885% ( 36) 00:30:46.712 16598.638 - 16713.111: 18.4581% ( 39) 00:30:46.712 16713.111 - 16827.584: 19.0934% ( 37) 00:30:46.712 16827.584 - 16942.058: 19.6772% ( 34) 00:30:46.712 16942.058 - 17056.531: 20.2266% ( 32) 00:30:46.712 17056.531 - 17171.004: 20.8791% ( 38) 00:30:46.712 17171.004 - 17285.478: 21.5144% ( 37) 00:30:46.712 17285.478 - 17399.951: 22.0982% ( 34) 00:30:46.713 17399.951 - 17514.424: 22.5962% ( 29) 00:30:46.713 17514.424 - 17628.898: 23.2315% ( 37) 00:30:46.713 17628.898 - 17743.371: 24.1415% ( 53) 00:30:46.713 17743.371 - 17857.845: 24.9485% ( 47) 00:30:46.713 17857.845 - 17972.318: 25.6353% ( 40) 00:30:46.713 17972.318 - 18086.791: 26.2191% ( 34) 00:30:46.713 18086.791 - 18201.265: 26.8029% ( 34) 00:30:46.713 18201.265 - 18315.738: 27.3352% ( 31) 00:30:46.713 18315.738 - 18430.211: 27.6957% ( 21) 00:30:46.713 18430.211 - 18544.685: 28.1250% ( 25) 00:30:46.713 18544.685 - 18659.158: 28.6745% ( 32) 00:30:46.713 18659.158 - 18773.631: 29.0865% ( 24) 00:30:46.713 18773.631 - 18888.105: 29.6016% ( 30) 00:30:46.713 18888.105 - 19002.578: 30.0309% ( 25) 00:30:46.713 19002.578 - 19117.052: 30.4773% ( 26) 00:30:46.713 19117.052 - 19231.525: 30.9581% ( 28) 00:30:46.713 19231.525 - 19345.998: 31.3874% ( 25) 00:30:46.713 19345.998 - 19460.472: 31.7823% ( 23) 00:30:46.713 19460.472 - 19574.945: 32.1257% ( 20) 00:30:46.713 19574.945 - 19689.418: 32.4348% ( 18) 00:30:46.713 19689.418 - 19803.892: 32.6065% ( 10) 00:30:46.713 19803.892 - 19918.365: 32.8125% ( 12) 00:30:46.713 19918.365 - 20032.838: 32.9327% ( 7) 00:30:46.713 20032.838 - 20147.312: 33.1387% ( 12) 00:30:46.713 20147.312 - 20261.785: 33.2418% ( 6) 00:30:46.713 20261.785 - 20376.259: 33.3963% ( 9) 00:30:46.713 20376.259 - 20490.732: 33.5680% ( 10) 00:30:46.713 20490.732 - 20605.205: 33.8084% ( 14) 00:30:46.713 20605.205 - 20719.679: 34.0488% ( 14) 00:30:46.713 20719.679 - 20834.152: 34.3235% ( 16) 00:30:46.713 20834.152 - 20948.625: 34.5982% ( 16) 00:30:46.713 20948.625 - 21063.099: 34.7356% ( 8) 00:30:46.713 21063.099 - 21177.572: 34.9588% ( 13) 00:30:46.713 21177.572 - 21292.045: 35.1305% ( 10) 00:30:46.713 21292.045 - 21406.519: 35.2679% ( 8) 00:30:46.713 21406.519 - 21520.992: 35.4739% ( 12) 00:30:46.713 21520.992 - 21635.466: 35.6456% ( 10) 00:30:46.713 21635.466 - 21749.939: 35.8345% ( 11) 00:30:46.713 21749.939 - 21864.412: 36.0577% ( 13) 00:30:46.713 21864.412 - 21978.886: 36.6243% ( 33) 00:30:46.713 21978.886 - 22093.359: 36.9849% ( 21) 00:30:46.713 22093.359 - 
22207.832: 37.4313% ( 26) 00:30:46.713 22207.832 - 22322.306: 37.9293% ( 29) 00:30:46.713 22322.306 - 22436.779: 38.2898% ( 21) 00:30:46.713 22436.779 - 22551.252: 38.8908% ( 35) 00:30:46.713 22551.252 - 22665.726: 39.4746% ( 34) 00:30:46.713 22665.726 - 22780.199: 40.0927% ( 36) 00:30:46.713 22780.199 - 22894.672: 40.9169% ( 48) 00:30:46.713 22894.672 - 23009.146: 42.7541% ( 107) 00:30:46.713 23009.146 - 23123.619: 45.0206% ( 132) 00:30:46.713 23123.619 - 23238.093: 46.8235% ( 105) 00:30:46.713 23238.093 - 23352.566: 49.1071% ( 133) 00:30:46.713 23352.566 - 23467.039: 51.2534% ( 125) 00:30:46.713 23467.039 - 23581.513: 53.8633% ( 152) 00:30:46.713 23581.513 - 23695.986: 55.4258% ( 91) 00:30:46.713 23695.986 - 23810.459: 56.9883% ( 91) 00:30:46.713 23810.459 - 23924.933: 58.5337% ( 90) 00:30:46.713 23924.933 - 24039.406: 59.9245% ( 81) 00:30:46.713 24039.406 - 24153.879: 61.2466% ( 77) 00:30:46.713 24153.879 - 24268.353: 62.5859% ( 78) 00:30:46.713 24268.353 - 24382.826: 63.9251% ( 78) 00:30:46.713 24382.826 - 24497.300: 65.1786% ( 73) 00:30:46.713 24497.300 - 24611.773: 66.4148% ( 72) 00:30:46.713 24611.773 - 24726.246: 67.7198% ( 76) 00:30:46.713 24726.246 - 24840.720: 69.1621% ( 84) 00:30:46.713 24840.720 - 24955.193: 70.5872% ( 83) 00:30:46.713 24955.193 - 25069.666: 71.9265% ( 78) 00:30:46.713 25069.666 - 25184.140: 73.3345% ( 82) 00:30:46.713 25184.140 - 25298.613: 74.9485% ( 94) 00:30:46.713 25298.613 - 25413.086: 76.1332% ( 69) 00:30:46.713 25413.086 - 25527.560: 77.7301% ( 93) 00:30:46.713 25527.560 - 25642.033: 79.0865% ( 79) 00:30:46.713 25642.033 - 25756.507: 80.4602% ( 80) 00:30:46.713 25756.507 - 25870.980: 81.9025% ( 84) 00:30:46.713 25870.980 - 25985.453: 83.3620% ( 85) 00:30:46.713 25985.453 - 26099.927: 84.6154% ( 73) 00:30:46.713 26099.927 - 26214.400: 85.9547% ( 78) 00:30:46.713 26214.400 - 26328.873: 87.2253% ( 74) 00:30:46.713 26328.873 - 26443.347: 88.5646% ( 78) 00:30:46.713 26443.347 - 26557.820: 89.6978% ( 66) 00:30:46.713 26557.820 - 26672.293: 91.1401% ( 84) 00:30:46.713 26672.293 - 26786.767: 92.1016% ( 56) 00:30:46.713 26786.767 - 26901.240: 93.4238% ( 77) 00:30:46.713 26901.240 - 27015.714: 94.6429% ( 71) 00:30:46.713 27015.714 - 27130.187: 95.6559% ( 59) 00:30:46.713 27130.187 - 27244.660: 96.4457% ( 46) 00:30:46.713 27244.660 - 27359.134: 97.0982% ( 38) 00:30:46.713 27359.134 - 27473.607: 97.4760% ( 22) 00:30:46.713 27473.607 - 27588.080: 97.5962% ( 7) 00:30:46.713 27588.080 - 27702.554: 97.6820% ( 5) 00:30:46.713 27702.554 - 27817.027: 97.7335% ( 3) 00:30:46.713 27817.027 - 27931.500: 97.7679% ( 2) 00:30:46.713 27931.500 - 28045.974: 97.8022% ( 2) 00:30:46.713 35944.636 - 36173.583: 97.9224% ( 7) 00:30:46.713 36173.583 - 36402.529: 98.0254% ( 6) 00:30:46.713 36402.529 - 36631.476: 98.0941% ( 4) 00:30:46.713 36631.476 - 36860.423: 98.2315% ( 8) 00:30:46.713 36860.423 - 37089.369: 98.3345% ( 6) 00:30:46.713 37089.369 - 37318.316: 98.4547% ( 7) 00:30:46.713 37318.316 - 37547.263: 98.5577% ( 6) 00:30:46.713 37547.263 - 37776.210: 98.6779% ( 7) 00:30:46.713 37776.210 - 38005.156: 98.7637% ( 5) 00:30:46.713 38005.156 - 38234.103: 98.8839% ( 7) 00:30:46.713 38234.103 - 38463.050: 98.9011% ( 1) 00:30:46.713 47163.025 - 47391.972: 98.9526% ( 3) 00:30:46.713 47391.972 - 47620.919: 99.0385% ( 5) 00:30:46.713 47620.919 - 47849.866: 99.1243% ( 5) 00:30:46.713 47849.866 - 48078.812: 99.1930% ( 4) 00:30:46.713 48078.812 - 48307.759: 99.2445% ( 3) 00:30:46.713 48307.759 - 48536.706: 99.3304% ( 5) 00:30:46.713 48536.706 - 48765.652: 99.3990% ( 4) 00:30:46.713 48765.652 - 
48994.599: 99.4677% ( 4) 00:30:46.713 48994.599 - 49223.546: 99.5536% ( 5) 00:30:46.713 49223.546 - 49452.493: 99.6223% ( 4) 00:30:46.713 49452.493 - 49681.439: 99.6909% ( 4) 00:30:46.713 49681.439 - 49910.386: 99.7596% ( 4) 00:30:46.713 49910.386 - 50139.333: 99.8455% ( 5) 00:30:46.713 50139.333 - 50368.279: 99.9141% ( 4) 00:30:46.713 50368.279 - 50597.226: 99.9828% ( 4) 00:30:46.713 50597.226 - 50826.173: 100.0000% ( 1) 00:30:46.713 00:30:46.713 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:46.713 ============================================================================== 00:30:46.713 Range in us Cumulative IO count 00:30:46.713 12992.727 - 13049.963: 0.0859% ( 5) 00:30:46.713 13049.963 - 13107.200: 0.1545% ( 4) 00:30:46.713 13107.200 - 13164.437: 0.2404% ( 5) 00:30:46.713 13164.437 - 13221.673: 0.3262% ( 5) 00:30:46.713 13221.673 - 13278.910: 0.4979% ( 10) 00:30:46.713 13278.910 - 13336.147: 0.7383% ( 14) 00:30:46.713 13336.147 - 13393.383: 0.8757% ( 8) 00:30:46.713 13393.383 - 13450.620: 1.0130% ( 8) 00:30:46.713 13450.620 - 13507.857: 1.1676% ( 9) 00:30:46.713 13507.857 - 13565.093: 1.2706% ( 6) 00:30:46.713 13565.093 - 13622.330: 1.4938% ( 13) 00:30:46.713 13622.330 - 13679.567: 1.7514% ( 15) 00:30:46.713 13679.567 - 13736.803: 2.0604% ( 18) 00:30:46.713 13736.803 - 13794.040: 2.3867% ( 19) 00:30:46.713 13794.040 - 13851.277: 2.8503% ( 27) 00:30:46.713 13851.277 - 13908.514: 3.2452% ( 23) 00:30:46.713 13908.514 - 13965.750: 3.8462% ( 35) 00:30:46.713 13965.750 - 14022.987: 4.3784% ( 31) 00:30:46.713 14022.987 - 14080.224: 5.2370% ( 50) 00:30:46.713 14080.224 - 14137.460: 5.5117% ( 16) 00:30:46.713 14137.460 - 14194.697: 5.7521% ( 14) 00:30:46.713 14194.697 - 14251.934: 6.0440% ( 17) 00:30:46.713 14251.934 - 14309.170: 6.3015% ( 15) 00:30:46.713 14309.170 - 14366.407: 6.6277% ( 19) 00:30:46.713 14366.407 - 14423.644: 6.9540% ( 19) 00:30:46.713 14423.644 - 14480.880: 7.2802% ( 19) 00:30:46.713 14480.880 - 14538.117: 7.6065% ( 19) 00:30:46.713 14538.117 - 14595.354: 7.9155% ( 18) 00:30:46.713 14595.354 - 14652.590: 8.1216% ( 12) 00:30:46.713 14652.590 - 14767.064: 8.4306% ( 18) 00:30:46.713 14767.064 - 14881.537: 8.8771% ( 26) 00:30:46.713 14881.537 - 14996.010: 9.5467% ( 39) 00:30:46.713 14996.010 - 15110.484: 10.1305% ( 34) 00:30:46.713 15110.484 - 15224.957: 10.6113% ( 28) 00:30:46.713 15224.957 - 15339.431: 11.0920% ( 28) 00:30:46.713 15339.431 - 15453.904: 12.2081% ( 65) 00:30:46.713 15453.904 - 15568.377: 12.7576% ( 32) 00:30:46.713 15568.377 - 15682.851: 13.2212% ( 27) 00:30:46.713 15682.851 - 15797.324: 13.6332% ( 24) 00:30:46.713 15797.324 - 15911.797: 14.0453% ( 24) 00:30:46.713 15911.797 - 16026.271: 14.7150% ( 39) 00:30:46.713 16026.271 - 16140.744: 15.4705% ( 44) 00:30:46.713 16140.744 - 16255.217: 16.3118% ( 49) 00:30:46.713 16255.217 - 16369.691: 17.1016% ( 46) 00:30:46.713 16369.691 - 16484.164: 17.7541% ( 38) 00:30:46.713 16484.164 - 16598.638: 18.3379% ( 34) 00:30:46.713 16598.638 - 16713.111: 18.9389% ( 35) 00:30:46.713 16713.111 - 16827.584: 19.8832% ( 55) 00:30:46.713 16827.584 - 16942.058: 20.5185% ( 37) 00:30:46.713 16942.058 - 17056.531: 20.9478% ( 25) 00:30:46.713 17056.531 - 17171.004: 21.3942% ( 26) 00:30:46.713 17171.004 - 17285.478: 21.7891% ( 23) 00:30:46.713 17285.478 - 17399.951: 22.1841% ( 23) 00:30:46.713 17399.951 - 17514.424: 22.6305% ( 26) 00:30:46.713 17514.424 - 17628.898: 23.0254% ( 23) 00:30:46.713 17628.898 - 17743.371: 23.2658% ( 14) 00:30:46.713 17743.371 - 17857.845: 23.5405% ( 16) 00:30:46.713 17857.845 - 17972.318: 
24.0556% ( 30) 00:30:46.713 17972.318 - 18086.791: 24.6566% ( 35) 00:30:46.713 18086.791 - 18201.265: 25.1889% ( 31) 00:30:46.713 18201.265 - 18315.738: 25.6181% ( 25) 00:30:46.713 18315.738 - 18430.211: 26.0817% ( 27) 00:30:46.713 18430.211 - 18544.685: 26.5282% ( 26) 00:30:46.714 18544.685 - 18659.158: 27.0604% ( 31) 00:30:46.714 18659.158 - 18773.631: 27.6786% ( 36) 00:30:46.714 18773.631 - 18888.105: 28.3826% ( 41) 00:30:46.714 18888.105 - 19002.578: 28.9320% ( 32) 00:30:46.714 19002.578 - 19117.052: 29.5158% ( 34) 00:30:46.714 19117.052 - 19231.525: 30.0309% ( 30) 00:30:46.714 19231.525 - 19345.998: 30.5975% ( 33) 00:30:46.714 19345.998 - 19460.472: 31.0268% ( 25) 00:30:46.714 19460.472 - 19574.945: 31.3702% ( 20) 00:30:46.714 19574.945 - 19689.418: 31.6106% ( 14) 00:30:46.714 19689.418 - 19803.892: 32.4004% ( 46) 00:30:46.714 19803.892 - 19918.365: 32.7782% ( 22) 00:30:46.714 19918.365 - 20032.838: 33.1216% ( 20) 00:30:46.714 20032.838 - 20147.312: 33.5165% ( 23) 00:30:46.714 20147.312 - 20261.785: 33.9286% ( 24) 00:30:46.714 20261.785 - 20376.259: 34.3063% ( 22) 00:30:46.714 20376.259 - 20490.732: 34.4609% ( 9) 00:30:46.714 20490.732 - 20605.205: 34.5639% ( 6) 00:30:46.714 20605.205 - 20719.679: 34.6669% ( 6) 00:30:46.714 20719.679 - 20834.152: 34.7699% ( 6) 00:30:46.714 20834.152 - 20948.625: 34.8214% ( 3) 00:30:46.714 20948.625 - 21063.099: 34.8558% ( 2) 00:30:46.714 21063.099 - 21177.572: 34.9073% ( 3) 00:30:46.714 21177.572 - 21292.045: 34.9416% ( 2) 00:30:46.714 21292.045 - 21406.519: 34.9931% ( 3) 00:30:46.714 21406.519 - 21520.992: 35.1648% ( 10) 00:30:46.714 21520.992 - 21635.466: 35.3709% ( 12) 00:30:46.714 21635.466 - 21749.939: 35.5769% ( 12) 00:30:46.714 21749.939 - 21864.412: 35.7143% ( 8) 00:30:46.714 21864.412 - 21978.886: 35.7486% ( 2) 00:30:46.714 21978.886 - 22093.359: 35.8860% ( 8) 00:30:46.714 22093.359 - 22207.832: 36.1607% ( 16) 00:30:46.714 22207.832 - 22322.306: 36.4526% ( 17) 00:30:46.714 22322.306 - 22436.779: 36.7617% ( 18) 00:30:46.714 22436.779 - 22551.252: 37.1394% ( 22) 00:30:46.714 22551.252 - 22665.726: 37.5000% ( 21) 00:30:46.714 22665.726 - 22780.199: 37.9293% ( 25) 00:30:46.714 22780.199 - 22894.672: 38.3757% ( 26) 00:30:46.714 22894.672 - 23009.146: 38.9423% ( 33) 00:30:46.714 23009.146 - 23123.619: 39.5948% ( 38) 00:30:46.714 23123.619 - 23238.093: 40.2644% ( 39) 00:30:46.714 23238.093 - 23352.566: 41.0886% ( 48) 00:30:46.714 23352.566 - 23467.039: 42.3592% ( 74) 00:30:46.714 23467.039 - 23581.513: 43.7672% ( 82) 00:30:46.714 23581.513 - 23695.986: 45.5357% ( 103) 00:30:46.714 23695.986 - 23810.459: 47.5790% ( 119) 00:30:46.714 23810.459 - 23924.933: 50.1030% ( 147) 00:30:46.714 23924.933 - 24039.406: 52.6099% ( 146) 00:30:46.714 24039.406 - 24153.879: 54.9966% ( 139) 00:30:46.714 24153.879 - 24268.353: 57.6751% ( 156) 00:30:46.714 24268.353 - 24382.826: 60.0446% ( 138) 00:30:46.714 24382.826 - 24497.300: 62.6202% ( 150) 00:30:46.714 24497.300 - 24611.773: 65.0412% ( 141) 00:30:46.714 24611.773 - 24726.246: 67.7541% ( 158) 00:30:46.714 24726.246 - 24840.720: 70.9650% ( 187) 00:30:46.714 24840.720 - 24955.193: 73.9698% ( 175) 00:30:46.714 24955.193 - 25069.666: 76.3736% ( 140) 00:30:46.714 25069.666 - 25184.140: 78.3482% ( 115) 00:30:46.714 25184.140 - 25298.613: 80.0996% ( 102) 00:30:46.714 25298.613 - 25413.086: 81.6621% ( 91) 00:30:46.714 25413.086 - 25527.560: 83.1731% ( 88) 00:30:46.714 25527.560 - 25642.033: 84.5467% ( 80) 00:30:46.714 25642.033 - 25756.507: 86.0405% ( 87) 00:30:46.714 25756.507 - 25870.980: 87.4485% ( 82) 00:30:46.714 
25870.980 - 25985.453: 88.7706% ( 77) 00:30:46.714 25985.453 - 26099.927: 90.2473% ( 86) 00:30:46.714 26099.927 - 26214.400: 91.5350% ( 75) 00:30:46.714 26214.400 - 26328.873: 92.7370% ( 70) 00:30:46.714 26328.873 - 26443.347: 93.8530% ( 65) 00:30:46.714 26443.347 - 26557.820: 94.7974% ( 55) 00:30:46.714 26557.820 - 26672.293: 95.7246% ( 54) 00:30:46.714 26672.293 - 26786.767: 96.3599% ( 37) 00:30:46.714 26786.767 - 26901.240: 96.9093% ( 32) 00:30:46.714 26901.240 - 27015.714: 97.2699% ( 21) 00:30:46.714 27015.714 - 27130.187: 97.4760% ( 12) 00:30:46.714 27130.187 - 27244.660: 97.5446% ( 4) 00:30:46.714 27244.660 - 27359.134: 97.6133% ( 4) 00:30:46.714 27359.134 - 27473.607: 97.6477% ( 2) 00:30:46.714 27473.607 - 27588.080: 97.6820% ( 2) 00:30:46.714 27588.080 - 27702.554: 97.6992% ( 1) 00:30:46.714 27702.554 - 27817.027: 97.7335% ( 2) 00:30:46.714 27817.027 - 27931.500: 97.7679% ( 2) 00:30:46.714 27931.500 - 28045.974: 97.8022% ( 2) 00:30:46.714 33655.169 - 33884.115: 97.8709% ( 4) 00:30:46.714 33884.115 - 34113.062: 97.9739% ( 6) 00:30:46.714 34113.062 - 34342.009: 98.0941% ( 7) 00:30:46.714 34342.009 - 34570.955: 98.2143% ( 7) 00:30:46.714 34570.955 - 34799.902: 98.3173% ( 6) 00:30:46.714 34799.902 - 35028.849: 98.4375% ( 7) 00:30:46.714 35028.849 - 35257.796: 98.5577% ( 7) 00:30:46.714 35257.796 - 35486.742: 98.6607% ( 6) 00:30:46.714 35486.742 - 35715.689: 98.7637% ( 6) 00:30:46.714 35715.689 - 35944.636: 98.8839% ( 7) 00:30:46.714 35944.636 - 36173.583: 98.9011% ( 1) 00:30:46.714 45331.452 - 45560.398: 98.9183% ( 1) 00:30:46.714 45560.398 - 45789.345: 98.9698% ( 3) 00:30:46.714 45789.345 - 46018.292: 99.0556% ( 5) 00:30:46.714 46018.292 - 46247.238: 99.1415% ( 5) 00:30:46.714 46247.238 - 46476.185: 99.2102% ( 4) 00:30:46.714 46476.185 - 46705.132: 99.2788% ( 4) 00:30:46.714 46705.132 - 46934.079: 99.3647% ( 5) 00:30:46.714 46934.079 - 47163.025: 99.4334% ( 4) 00:30:46.714 47163.025 - 47391.972: 99.5192% ( 5) 00:30:46.714 47391.972 - 47620.919: 99.6051% ( 5) 00:30:46.714 47620.919 - 47849.866: 99.6909% ( 5) 00:30:46.714 47849.866 - 48078.812: 99.7596% ( 4) 00:30:46.714 48078.812 - 48307.759: 99.8455% ( 5) 00:30:46.714 48307.759 - 48536.706: 99.9313% ( 5) 00:30:46.714 48536.706 - 48765.652: 100.0000% ( 4) 00:30:46.714 00:30:46.714 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:46.714 ============================================================================== 00:30:46.714 Range in us Cumulative IO count 00:30:46.714 12706.543 - 12763.780: 0.0172% ( 1) 00:30:46.714 12878.253 - 12935.490: 0.0687% ( 3) 00:30:46.714 12935.490 - 12992.727: 0.1202% ( 3) 00:30:46.714 12992.727 - 13049.963: 0.2060% ( 5) 00:30:46.714 13049.963 - 13107.200: 0.2919% ( 5) 00:30:46.714 13107.200 - 13164.437: 0.4464% ( 9) 00:30:46.714 13164.437 - 13221.673: 0.7212% ( 16) 00:30:46.714 13221.673 - 13278.910: 0.8585% ( 8) 00:30:46.714 13278.910 - 13336.147: 0.9444% ( 5) 00:30:46.714 13336.147 - 13393.383: 1.0302% ( 5) 00:30:46.714 13393.383 - 13450.620: 1.1848% ( 9) 00:30:46.714 13450.620 - 13507.857: 1.3393% ( 9) 00:30:46.714 13507.857 - 13565.093: 1.5453% ( 12) 00:30:46.714 13565.093 - 13622.330: 1.7342% ( 11) 00:30:46.714 13622.330 - 13679.567: 2.0261% ( 17) 00:30:46.714 13679.567 - 13736.803: 2.4038% ( 22) 00:30:46.714 13736.803 - 13794.040: 2.7301% ( 19) 00:30:46.714 13794.040 - 13851.277: 3.5199% ( 46) 00:30:46.714 13851.277 - 13908.514: 4.1896% ( 39) 00:30:46.714 13908.514 - 13965.750: 4.7734% ( 34) 00:30:46.714 13965.750 - 14022.987: 5.2198% ( 26) 00:30:46.714 14022.987 - 14080.224: 
5.5460% ( 19) 00:30:46.714 14080.224 - 14137.460: 5.8036% ( 15) 00:30:46.714 14137.460 - 14194.697: 6.0611% ( 15) 00:30:46.714 14194.697 - 14251.934: 6.2843% ( 13) 00:30:46.714 14251.934 - 14309.170: 6.4560% ( 10) 00:30:46.714 14309.170 - 14366.407: 6.6621% ( 12) 00:30:46.714 14366.407 - 14423.644: 6.8510% ( 11) 00:30:46.714 14423.644 - 14480.880: 7.0398% ( 11) 00:30:46.714 14480.880 - 14538.117: 7.3317% ( 17) 00:30:46.714 14538.117 - 14595.354: 7.8297% ( 29) 00:30:46.714 14595.354 - 14652.590: 8.0872% ( 15) 00:30:46.714 14652.590 - 14767.064: 8.3963% ( 18) 00:30:46.714 14767.064 - 14881.537: 8.8255% ( 25) 00:30:46.714 14881.537 - 14996.010: 9.3235% ( 29) 00:30:46.714 14996.010 - 15110.484: 9.8043% ( 28) 00:30:46.714 15110.484 - 15224.957: 10.5769% ( 45) 00:30:46.714 15224.957 - 15339.431: 11.4183% ( 49) 00:30:46.714 15339.431 - 15453.904: 12.1051% ( 40) 00:30:46.714 15453.904 - 15568.377: 12.7576% ( 38) 00:30:46.714 15568.377 - 15682.851: 13.4615% ( 41) 00:30:46.714 15682.851 - 15797.324: 14.3201% ( 50) 00:30:46.714 15797.324 - 15911.797: 15.0927% ( 45) 00:30:46.714 15911.797 - 16026.271: 15.8139% ( 42) 00:30:46.714 16026.271 - 16140.744: 16.3462% ( 31) 00:30:46.714 16140.744 - 16255.217: 16.9128% ( 33) 00:30:46.714 16255.217 - 16369.691: 17.5137% ( 35) 00:30:46.714 16369.691 - 16484.164: 18.0632% ( 32) 00:30:46.714 16484.164 - 16598.638: 18.5096% ( 26) 00:30:46.714 16598.638 - 16713.111: 19.0419% ( 31) 00:30:46.714 16713.111 - 16827.584: 19.7459% ( 41) 00:30:46.714 16827.584 - 16942.058: 20.2782% ( 31) 00:30:46.714 16942.058 - 17056.531: 20.7246% ( 26) 00:30:46.714 17056.531 - 17171.004: 21.1882% ( 27) 00:30:46.714 17171.004 - 17285.478: 21.5316% ( 20) 00:30:46.714 17285.478 - 17399.951: 21.9093% ( 22) 00:30:46.714 17399.951 - 17514.424: 22.1497% ( 14) 00:30:46.714 17514.424 - 17628.898: 22.3901% ( 14) 00:30:46.714 17628.898 - 17743.371: 22.6133% ( 13) 00:30:46.714 17743.371 - 17857.845: 22.8022% ( 11) 00:30:46.714 17857.845 - 17972.318: 23.0082% ( 12) 00:30:46.714 17972.318 - 18086.791: 23.3345% ( 19) 00:30:46.714 18086.791 - 18201.265: 23.9526% ( 36) 00:30:46.714 18201.265 - 18315.738: 24.6909% ( 43) 00:30:46.714 18315.738 - 18430.211: 25.1717% ( 28) 00:30:46.714 18430.211 - 18544.685: 25.7212% ( 32) 00:30:46.714 18544.685 - 18659.158: 26.3565% ( 37) 00:30:46.714 18659.158 - 18773.631: 27.0261% ( 39) 00:30:46.714 18773.631 - 18888.105: 28.0735% ( 61) 00:30:46.714 18888.105 - 19002.578: 28.8977% ( 48) 00:30:46.714 19002.578 - 19117.052: 29.4299% ( 31) 00:30:46.714 19117.052 - 19231.525: 30.1168% ( 40) 00:30:46.714 19231.525 - 19345.998: 30.4602% ( 20) 00:30:46.714 19345.998 - 19460.472: 30.8036% ( 20) 00:30:46.715 19460.472 - 19574.945: 31.1813% ( 22) 00:30:46.715 19574.945 - 19689.418: 31.7479% ( 33) 00:30:46.715 19689.418 - 19803.892: 32.1085% ( 21) 00:30:46.715 19803.892 - 19918.365: 32.4348% ( 19) 00:30:46.715 19918.365 - 20032.838: 32.7782% ( 20) 00:30:46.715 20032.838 - 20147.312: 33.0872% ( 18) 00:30:46.715 20147.312 - 20261.785: 33.3448% ( 15) 00:30:46.715 20261.785 - 20376.259: 33.7225% ( 22) 00:30:46.715 20376.259 - 20490.732: 34.1518% ( 25) 00:30:46.715 20490.732 - 20605.205: 34.5467% ( 23) 00:30:46.715 20605.205 - 20719.679: 34.7871% ( 14) 00:30:46.715 20719.679 - 20834.152: 34.9245% ( 8) 00:30:46.715 20834.152 - 20948.625: 35.0446% ( 7) 00:30:46.715 20948.625 - 21063.099: 35.1648% ( 7) 00:30:46.715 21063.099 - 21177.572: 35.3022% ( 8) 00:30:46.715 21177.572 - 21292.045: 35.4396% ( 8) 00:30:46.715 21292.045 - 21406.519: 35.5254% ( 5) 00:30:46.715 21406.519 - 21520.992: 
35.6284% ( 6) 00:30:46.715 21520.992 - 21635.466: 35.7143% ( 5) 00:30:46.715 21635.466 - 21749.939: 35.8001% ( 5) 00:30:46.715 21749.939 - 21864.412: 35.8516% ( 3) 00:30:46.715 21864.412 - 21978.886: 35.8860% ( 2) 00:30:46.715 21978.886 - 22093.359: 35.9203% ( 2) 00:30:46.715 22093.359 - 22207.832: 36.0405% ( 7) 00:30:46.715 22207.832 - 22322.306: 36.2294% ( 11) 00:30:46.715 22322.306 - 22436.779: 36.4870% ( 15) 00:30:46.715 22436.779 - 22551.252: 36.7960% ( 18) 00:30:46.715 22551.252 - 22665.726: 37.3283% ( 31) 00:30:46.715 22665.726 - 22780.199: 37.7576% ( 25) 00:30:46.715 22780.199 - 22894.672: 38.4444% ( 40) 00:30:46.715 22894.672 - 23009.146: 39.0968% ( 38) 00:30:46.715 23009.146 - 23123.619: 39.7665% ( 39) 00:30:46.715 23123.619 - 23238.093: 40.4190% ( 38) 00:30:46.715 23238.093 - 23352.566: 41.1229% ( 41) 00:30:46.715 23352.566 - 23467.039: 42.5824% ( 85) 00:30:46.715 23467.039 - 23581.513: 44.0247% ( 84) 00:30:46.715 23581.513 - 23695.986: 45.9135% ( 110) 00:30:46.715 23695.986 - 23810.459: 48.0082% ( 122) 00:30:46.715 23810.459 - 23924.933: 50.5151% ( 146) 00:30:46.715 23924.933 - 24039.406: 53.0563% ( 148) 00:30:46.715 24039.406 - 24153.879: 55.6319% ( 150) 00:30:46.715 24153.879 - 24268.353: 58.2246% ( 151) 00:30:46.715 24268.353 - 24382.826: 60.5598% ( 136) 00:30:46.715 24382.826 - 24497.300: 62.8091% ( 131) 00:30:46.715 24497.300 - 24611.773: 65.0412% ( 130) 00:30:46.715 24611.773 - 24726.246: 67.4966% ( 143) 00:30:46.715 24726.246 - 24840.720: 70.3297% ( 165) 00:30:46.715 24840.720 - 24955.193: 72.8194% ( 145) 00:30:46.715 24955.193 - 25069.666: 75.2232% ( 140) 00:30:46.715 25069.666 - 25184.140: 77.3008% ( 121) 00:30:46.715 25184.140 - 25298.613: 79.0350% ( 101) 00:30:46.715 25298.613 - 25413.086: 80.5804% ( 90) 00:30:46.715 25413.086 - 25527.560: 82.2115% ( 95) 00:30:46.715 25527.560 - 25642.033: 83.8084% ( 93) 00:30:46.715 25642.033 - 25756.507: 85.2850% ( 86) 00:30:46.715 25756.507 - 25870.980: 86.7102% ( 83) 00:30:46.715 25870.980 - 25985.453: 88.1696% ( 85) 00:30:46.715 25985.453 - 26099.927: 89.6806% ( 88) 00:30:46.715 26099.927 - 26214.400: 90.9684% ( 75) 00:30:46.715 26214.400 - 26328.873: 92.3077% ( 78) 00:30:46.715 26328.873 - 26443.347: 93.5783% ( 74) 00:30:46.715 26443.347 - 26557.820: 94.7974% ( 71) 00:30:46.715 26557.820 - 26672.293: 95.7589% ( 56) 00:30:46.715 26672.293 - 26786.767: 96.4629% ( 41) 00:30:46.715 26786.767 - 26901.240: 97.0295% ( 33) 00:30:46.715 26901.240 - 27015.714: 97.4073% ( 22) 00:30:46.715 27015.714 - 27130.187: 97.6133% ( 12) 00:30:46.715 27130.187 - 27244.660: 97.6648% ( 3) 00:30:46.715 27244.660 - 27359.134: 97.6992% ( 2) 00:30:46.715 27359.134 - 27473.607: 97.7335% ( 2) 00:30:46.715 27473.607 - 27588.080: 97.7850% ( 3) 00:30:46.715 27588.080 - 27702.554: 97.8022% ( 1) 00:30:46.715 32510.435 - 32739.382: 97.9224% ( 7) 00:30:46.715 32739.382 - 32968.328: 98.0254% ( 6) 00:30:46.715 32968.328 - 33197.275: 98.1456% ( 7) 00:30:46.715 33197.275 - 33426.222: 98.2830% ( 8) 00:30:46.715 33426.222 - 33655.169: 98.4032% ( 7) 00:30:46.715 33655.169 - 33884.115: 98.5234% ( 7) 00:30:46.715 33884.115 - 34113.062: 98.6435% ( 7) 00:30:46.715 34113.062 - 34342.009: 98.7637% ( 7) 00:30:46.715 34342.009 - 34570.955: 98.8839% ( 7) 00:30:46.715 34570.955 - 34799.902: 98.9011% ( 1) 00:30:46.715 43728.824 - 43957.771: 98.9526% ( 3) 00:30:46.715 43957.771 - 44186.718: 99.0213% ( 4) 00:30:46.715 44186.718 - 44415.665: 99.1071% ( 5) 00:30:46.715 44415.665 - 44644.611: 99.1758% ( 4) 00:30:46.715 44644.611 - 44873.558: 99.2617% ( 5) 00:30:46.715 44873.558 - 
45102.505: 99.3475% ( 5) 00:30:46.715 45102.505 - 45331.452: 99.4162% ( 4) 00:30:46.715 45331.452 - 45560.398: 99.4849% ( 4) 00:30:46.715 45560.398 - 45789.345: 99.5707% ( 5) 00:30:46.715 45789.345 - 46018.292: 99.6566% ( 5) 00:30:46.715 46018.292 - 46247.238: 99.7424% ( 5) 00:30:46.715 46247.238 - 46476.185: 99.8283% ( 5) 00:30:46.715 46476.185 - 46705.132: 99.9141% ( 5) 00:30:46.715 46705.132 - 46934.079: 100.0000% ( 5) 00:30:46.715 00:30:46.715 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:46.715 ============================================================================== 00:30:46.715 Range in us Cumulative IO count 00:30:46.715 13164.437 - 13221.673: 0.0172% ( 1) 00:30:46.715 13336.147 - 13393.383: 0.0343% ( 1) 00:30:46.715 13393.383 - 13450.620: 0.2060% ( 10) 00:30:46.715 13450.620 - 13507.857: 0.3434% ( 8) 00:30:46.715 13507.857 - 13565.093: 0.6010% ( 15) 00:30:46.715 13565.093 - 13622.330: 0.9272% ( 19) 00:30:46.715 13622.330 - 13679.567: 1.5625% ( 37) 00:30:46.715 13679.567 - 13736.803: 2.3008% ( 43) 00:30:46.715 13736.803 - 13794.040: 2.7988% ( 29) 00:30:46.715 13794.040 - 13851.277: 3.2280% ( 25) 00:30:46.715 13851.277 - 13908.514: 3.6916% ( 27) 00:30:46.715 13908.514 - 13965.750: 4.1552% ( 27) 00:30:46.715 13965.750 - 14022.987: 4.6188% ( 27) 00:30:46.715 14022.987 - 14080.224: 5.3400% ( 42) 00:30:46.715 14080.224 - 14137.460: 5.8723% ( 31) 00:30:46.715 14137.460 - 14194.697: 6.1985% ( 19) 00:30:46.715 14194.697 - 14251.934: 6.4389% ( 14) 00:30:46.715 14251.934 - 14309.170: 6.6621% ( 13) 00:30:46.715 14309.170 - 14366.407: 6.9196% ( 15) 00:30:46.715 14366.407 - 14423.644: 7.1944% ( 16) 00:30:46.715 14423.644 - 14480.880: 7.4004% ( 12) 00:30:46.715 14480.880 - 14538.117: 7.6065% ( 12) 00:30:46.715 14538.117 - 14595.354: 8.0357% ( 25) 00:30:46.715 14595.354 - 14652.590: 8.3276% ( 17) 00:30:46.715 14652.590 - 14767.064: 8.6195% ( 17) 00:30:46.715 14767.064 - 14881.537: 8.8942% ( 16) 00:30:46.715 14881.537 - 14996.010: 9.2548% ( 21) 00:30:46.715 14996.010 - 15110.484: 9.6669% ( 24) 00:30:46.715 15110.484 - 15224.957: 10.2163% ( 32) 00:30:46.715 15224.957 - 15339.431: 10.7658% ( 32) 00:30:46.715 15339.431 - 15453.904: 11.2981% ( 31) 00:30:46.715 15453.904 - 15568.377: 11.8475% ( 32) 00:30:46.715 15568.377 - 15682.851: 12.5515% ( 41) 00:30:46.715 15682.851 - 15797.324: 13.3757% ( 48) 00:30:46.715 15797.324 - 15911.797: 14.2170% ( 49) 00:30:46.715 15911.797 - 16026.271: 14.8867% ( 39) 00:30:46.715 16026.271 - 16140.744: 15.4533% ( 33) 00:30:46.715 16140.744 - 16255.217: 16.0886% ( 37) 00:30:46.715 16255.217 - 16369.691: 16.8784% ( 46) 00:30:46.715 16369.691 - 16484.164: 17.6854% ( 47) 00:30:46.715 16484.164 - 16598.638: 18.2864% ( 35) 00:30:46.715 16598.638 - 16713.111: 18.8702% ( 34) 00:30:46.715 16713.111 - 16827.584: 19.4196% ( 32) 00:30:46.715 16827.584 - 16942.058: 20.2266% ( 47) 00:30:46.715 16942.058 - 17056.531: 20.6387% ( 24) 00:30:46.715 17056.531 - 17171.004: 21.1538% ( 30) 00:30:46.715 17171.004 - 17285.478: 21.6518% ( 29) 00:30:46.715 17285.478 - 17399.951: 22.0124% ( 21) 00:30:46.715 17399.951 - 17514.424: 22.3043% ( 17) 00:30:46.715 17514.424 - 17628.898: 22.6992% ( 23) 00:30:46.715 17628.898 - 17743.371: 23.3001% ( 35) 00:30:46.715 17743.371 - 17857.845: 24.2788% ( 57) 00:30:46.715 17857.845 - 17972.318: 25.2060% ( 54) 00:30:46.715 17972.318 - 18086.791: 25.6868% ( 28) 00:30:46.715 18086.791 - 18201.265: 26.0646% ( 22) 00:30:46.715 18201.265 - 18315.738: 26.4423% ( 22) 00:30:46.715 18315.738 - 18430.211: 26.8201% ( 22) 00:30:46.715 18430.211 - 
18544.685: 27.2321% ( 24) 00:30:46.715 18544.685 - 18659.158: 27.5927% ( 21) 00:30:46.715 18659.158 - 18773.631: 28.3482% ( 44) 00:30:46.715 18773.631 - 18888.105: 28.8805% ( 31) 00:30:46.715 18888.105 - 19002.578: 29.4128% ( 31) 00:30:46.715 19002.578 - 19117.052: 29.8420% ( 25) 00:30:46.715 19117.052 - 19231.525: 30.5288% ( 40) 00:30:46.715 19231.525 - 19345.998: 30.9924% ( 27) 00:30:46.715 19345.998 - 19460.472: 31.3530% ( 21) 00:30:46.715 19460.472 - 19574.945: 31.7823% ( 25) 00:30:46.715 19574.945 - 19689.418: 32.0570% ( 16) 00:30:46.715 19689.418 - 19803.892: 32.7438% ( 40) 00:30:46.715 19803.892 - 19918.365: 32.9670% ( 13) 00:30:46.715 19918.365 - 20032.838: 33.2074% ( 14) 00:30:46.715 20032.838 - 20147.312: 33.4650% ( 15) 00:30:46.715 20147.312 - 20261.785: 33.7054% ( 14) 00:30:46.715 20261.785 - 20376.259: 33.9114% ( 12) 00:30:46.715 20376.259 - 20490.732: 34.1690% ( 15) 00:30:46.715 20490.732 - 20605.205: 34.3922% ( 13) 00:30:46.715 20605.205 - 20719.679: 34.6841% ( 17) 00:30:46.715 20719.679 - 20834.152: 34.8558% ( 10) 00:30:46.715 20834.152 - 20948.625: 35.0446% ( 11) 00:30:46.715 20948.625 - 21063.099: 35.2163% ( 10) 00:30:46.715 21063.099 - 21177.572: 35.5941% ( 22) 00:30:46.715 21177.572 - 21292.045: 35.7830% ( 11) 00:30:46.715 21292.045 - 21406.519: 35.8345% ( 3) 00:30:46.715 21406.519 - 21520.992: 35.9203% ( 5) 00:30:46.715 21520.992 - 21635.466: 35.9890% ( 4) 00:30:46.715 21635.466 - 21749.939: 36.0577% ( 4) 00:30:46.715 21749.939 - 21864.412: 36.1264% ( 4) 00:30:46.715 21864.412 - 21978.886: 36.2122% ( 5) 00:30:46.715 21978.886 - 22093.359: 36.2466% ( 2) 00:30:46.715 22093.359 - 22207.832: 36.2637% ( 1) 00:30:46.716 22207.832 - 22322.306: 36.2981% ( 2) 00:30:46.716 22322.306 - 22436.779: 36.3668% ( 4) 00:30:46.716 22436.779 - 22551.252: 36.5385% ( 10) 00:30:46.716 22551.252 - 22665.726: 36.7788% ( 14) 00:30:46.716 22665.726 - 22780.199: 37.0536% ( 16) 00:30:46.716 22780.199 - 22894.672: 37.3111% ( 15) 00:30:46.716 22894.672 - 23009.146: 37.8434% ( 31) 00:30:46.716 23009.146 - 23123.619: 38.4615% ( 36) 00:30:46.716 23123.619 - 23238.093: 39.1655% ( 41) 00:30:46.716 23238.093 - 23352.566: 40.1614% ( 58) 00:30:46.716 23352.566 - 23467.039: 41.3462% ( 69) 00:30:46.716 23467.039 - 23581.513: 43.1834% ( 107) 00:30:46.716 23581.513 - 23695.986: 45.1580% ( 115) 00:30:46.716 23695.986 - 23810.459: 47.6133% ( 143) 00:30:46.716 23810.459 - 23924.933: 50.1545% ( 148) 00:30:46.716 23924.933 - 24039.406: 52.7816% ( 153) 00:30:46.716 24039.406 - 24153.879: 55.2713% ( 145) 00:30:46.716 24153.879 - 24268.353: 57.5893% ( 135) 00:30:46.716 24268.353 - 24382.826: 59.8901% ( 134) 00:30:46.716 24382.826 - 24497.300: 62.0707% ( 127) 00:30:46.716 24497.300 - 24611.773: 64.5261% ( 143) 00:30:46.716 24611.773 - 24726.246: 67.0501% ( 147) 00:30:46.716 24726.246 - 24840.720: 69.7287% ( 156) 00:30:46.716 24840.720 - 24955.193: 71.9780% ( 131) 00:30:46.716 24955.193 - 25069.666: 74.2960% ( 135) 00:30:46.716 25069.666 - 25184.140: 76.4251% ( 124) 00:30:46.716 25184.140 - 25298.613: 78.2452% ( 106) 00:30:46.716 25298.613 - 25413.086: 80.0824% ( 107) 00:30:46.716 25413.086 - 25527.560: 81.8166% ( 101) 00:30:46.716 25527.560 - 25642.033: 83.4478% ( 95) 00:30:46.716 25642.033 - 25756.507: 85.0962% ( 96) 00:30:46.716 25756.507 - 25870.980: 86.4870% ( 81) 00:30:46.716 25870.980 - 25985.453: 88.1181% ( 95) 00:30:46.716 25985.453 - 26099.927: 89.4918% ( 80) 00:30:46.716 26099.927 - 26214.400: 90.8310% ( 78) 00:30:46.716 26214.400 - 26328.873: 92.2218% ( 81) 00:30:46.716 26328.873 - 26443.347: 93.5268% ( 76) 
00:30:46.716 26443.347 - 26557.820: 94.7115% ( 69) 00:30:46.716 26557.820 - 26672.293: 95.8104% ( 64) 00:30:46.716 26672.293 - 26786.767: 96.5659% ( 44) 00:30:46.716 26786.767 - 26901.240: 97.1497% ( 34) 00:30:46.716 26901.240 - 27015.714: 97.5446% ( 23) 00:30:46.716 27015.714 - 27130.187: 97.6992% ( 9) 00:30:46.716 27130.187 - 27244.660: 97.7679% ( 4) 00:30:46.716 27244.660 - 27359.134: 97.8022% ( 2) 00:30:46.716 30449.914 - 30678.861: 97.8365% ( 2) 00:30:46.716 30678.861 - 30907.808: 97.9224% ( 5) 00:30:46.716 30907.808 - 31136.755: 97.9911% ( 4) 00:30:46.716 31136.755 - 31365.701: 98.0769% ( 5) 00:30:46.716 31365.701 - 31594.648: 98.1628% ( 5) 00:30:46.716 31594.648 - 31823.595: 98.2315% ( 4) 00:30:46.716 31823.595 - 32052.541: 98.3173% ( 5) 00:30:46.716 32052.541 - 32281.488: 98.3860% ( 4) 00:30:46.716 32281.488 - 32510.435: 98.4718% ( 5) 00:30:46.716 32510.435 - 32739.382: 98.5577% ( 5) 00:30:46.716 32739.382 - 32968.328: 98.6264% ( 4) 00:30:46.716 32968.328 - 33197.275: 98.7122% ( 5) 00:30:46.716 33197.275 - 33426.222: 98.7809% ( 4) 00:30:46.716 33426.222 - 33655.169: 98.8668% ( 5) 00:30:46.716 33655.169 - 33884.115: 98.9011% ( 2) 00:30:46.716 41897.251 - 42126.197: 98.9183% ( 1) 00:30:46.716 42126.197 - 42355.144: 98.9870% ( 4) 00:30:46.716 42355.144 - 42584.091: 99.0728% ( 5) 00:30:46.716 42584.091 - 42813.038: 99.1415% ( 4) 00:30:46.716 42813.038 - 43041.984: 99.2273% ( 5) 00:30:46.716 43041.984 - 43270.931: 99.2960% ( 4) 00:30:46.716 43270.931 - 43499.878: 99.3819% ( 5) 00:30:46.716 43499.878 - 43728.824: 99.4677% ( 5) 00:30:46.716 43728.824 - 43957.771: 99.5536% ( 5) 00:30:46.716 43957.771 - 44186.718: 99.6394% ( 5) 00:30:46.716 44186.718 - 44415.665: 99.7253% ( 5) 00:30:46.716 44415.665 - 44644.611: 99.8111% ( 5) 00:30:46.716 44644.611 - 44873.558: 99.8970% ( 5) 00:30:46.716 44873.558 - 45102.505: 99.9657% ( 4) 00:30:46.716 45102.505 - 45331.452: 100.0000% ( 2) 00:30:46.716 00:30:46.716 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:46.716 ============================================================================== 00:30:46.716 Range in us Cumulative IO count 00:30:46.716 13336.147 - 13393.383: 0.0343% ( 2) 00:30:46.716 13393.383 - 13450.620: 0.1374% ( 6) 00:30:46.716 13450.620 - 13507.857: 0.3091% ( 10) 00:30:46.716 13507.857 - 13565.093: 0.6010% ( 17) 00:30:46.716 13565.093 - 13622.330: 0.7898% ( 11) 00:30:46.716 13622.330 - 13679.567: 1.0474% ( 15) 00:30:46.716 13679.567 - 13736.803: 1.3565% ( 18) 00:30:46.716 13736.803 - 13794.040: 1.6827% ( 19) 00:30:46.716 13794.040 - 13851.277: 2.1463% ( 27) 00:30:46.716 13851.277 - 13908.514: 2.3867% ( 14) 00:30:46.716 13908.514 - 13965.750: 2.6957% ( 18) 00:30:46.716 13965.750 - 14022.987: 2.9361% ( 14) 00:30:46.716 14022.987 - 14080.224: 3.5714% ( 37) 00:30:46.716 14080.224 - 14137.460: 3.9835% ( 24) 00:30:46.716 14137.460 - 14194.697: 4.3784% ( 23) 00:30:46.716 14194.697 - 14251.934: 4.7218% ( 20) 00:30:46.716 14251.934 - 14309.170: 5.0652% ( 20) 00:30:46.716 14309.170 - 14366.407: 5.6147% ( 32) 00:30:46.716 14366.407 - 14423.644: 5.9753% ( 21) 00:30:46.716 14423.644 - 14480.880: 6.2500% ( 16) 00:30:46.716 14480.880 - 14538.117: 7.1257% ( 51) 00:30:46.716 14538.117 - 14595.354: 7.4004% ( 16) 00:30:46.716 14595.354 - 14652.590: 7.9155% ( 30) 00:30:46.716 14652.590 - 14767.064: 8.9457% ( 60) 00:30:46.716 14767.064 - 14881.537: 9.6154% ( 39) 00:30:46.716 14881.537 - 14996.010: 10.0446% ( 25) 00:30:46.716 14996.010 - 15110.484: 10.3365% ( 17) 00:30:46.716 15110.484 - 15224.957: 10.5941% ( 15) 00:30:46.716 15224.957 
- 15339.431: 10.9547% ( 21) 00:30:46.716 15339.431 - 15453.904: 11.4183% ( 27) 00:30:46.716 15453.904 - 15568.377: 11.9162% ( 29) 00:30:46.716 15568.377 - 15682.851: 12.4657% ( 32) 00:30:46.716 15682.851 - 15797.324: 13.0151% ( 32) 00:30:46.716 15797.324 - 15911.797: 13.5989% ( 34) 00:30:46.716 15911.797 - 16026.271: 14.0453% ( 26) 00:30:46.716 16026.271 - 16140.744: 14.7493% ( 41) 00:30:46.716 16140.744 - 16255.217: 15.2301% ( 28) 00:30:46.716 16255.217 - 16369.691: 15.6937% ( 27) 00:30:46.716 16369.691 - 16484.164: 16.1229% ( 25) 00:30:46.716 16484.164 - 16598.638: 16.6380% ( 30) 00:30:46.716 16598.638 - 16713.111: 17.4451% ( 47) 00:30:46.716 16713.111 - 16827.584: 18.4066% ( 56) 00:30:46.716 16827.584 - 16942.058: 19.9004% ( 87) 00:30:46.716 16942.058 - 17056.531: 21.0337% ( 66) 00:30:46.716 17056.531 - 17171.004: 21.8750% ( 49) 00:30:46.716 17171.004 - 17285.478: 22.7507% ( 51) 00:30:46.716 17285.478 - 17399.951: 23.3688% ( 36) 00:30:46.716 17399.951 - 17514.424: 23.8839% ( 30) 00:30:46.716 17514.424 - 17628.898: 24.2788% ( 23) 00:30:46.716 17628.898 - 17743.371: 24.6738% ( 23) 00:30:46.716 17743.371 - 17857.845: 25.2404% ( 33) 00:30:46.716 17857.845 - 17972.318: 25.8413% ( 35) 00:30:46.716 17972.318 - 18086.791: 26.3565% ( 30) 00:30:46.716 18086.791 - 18201.265: 27.0089% ( 38) 00:30:46.716 18201.265 - 18315.738: 27.6786% ( 39) 00:30:46.716 18315.738 - 18430.211: 28.0735% ( 23) 00:30:46.716 18430.211 - 18544.685: 28.4512% ( 22) 00:30:46.716 18544.685 - 18659.158: 28.9148% ( 27) 00:30:46.716 18659.158 - 18773.631: 29.2926% ( 22) 00:30:46.716 18773.631 - 18888.105: 29.8420% ( 32) 00:30:46.716 18888.105 - 19002.578: 30.4430% ( 35) 00:30:46.716 19002.578 - 19117.052: 30.8207% ( 22) 00:30:46.716 19117.052 - 19231.525: 31.2843% ( 27) 00:30:46.716 19231.525 - 19345.998: 31.7651% ( 28) 00:30:46.716 19345.998 - 19460.472: 32.1944% ( 25) 00:30:46.716 19460.472 - 19574.945: 32.4691% ( 16) 00:30:46.716 19574.945 - 19689.418: 32.7438% ( 16) 00:30:46.716 19689.418 - 19803.892: 33.0185% ( 16) 00:30:46.716 19803.892 - 19918.365: 33.2933% ( 16) 00:30:46.716 19918.365 - 20032.838: 33.4993% ( 12) 00:30:46.716 20032.838 - 20147.312: 33.6882% ( 11) 00:30:46.716 20147.312 - 20261.785: 33.8427% ( 9) 00:30:46.716 20261.785 - 20376.259: 34.0144% ( 10) 00:30:46.716 20376.259 - 20490.732: 34.1003% ( 5) 00:30:46.716 20490.732 - 20605.205: 34.2720% ( 10) 00:30:46.716 20605.205 - 20719.679: 34.4609% ( 11) 00:30:46.717 20719.679 - 20834.152: 34.6669% ( 12) 00:30:46.717 20834.152 - 20948.625: 34.8558% ( 11) 00:30:46.717 20948.625 - 21063.099: 34.9931% ( 8) 00:30:46.717 21063.099 - 21177.572: 35.1820% ( 11) 00:30:46.717 21177.572 - 21292.045: 35.3194% ( 8) 00:30:46.717 21292.045 - 21406.519: 35.4052% ( 5) 00:30:46.717 21406.519 - 21520.992: 35.5769% ( 10) 00:30:46.717 21520.992 - 21635.466: 35.9032% ( 19) 00:30:46.717 21635.466 - 21749.939: 36.0577% ( 9) 00:30:46.717 21749.939 - 21864.412: 36.0749% ( 1) 00:30:46.717 21864.412 - 21978.886: 36.1264% ( 3) 00:30:46.717 21978.886 - 22093.359: 36.2122% ( 5) 00:30:46.717 22093.359 - 22207.832: 36.2981% ( 5) 00:30:46.717 22207.832 - 22322.306: 36.4698% ( 10) 00:30:46.717 22322.306 - 22436.779: 36.6243% ( 9) 00:30:46.717 22436.779 - 22551.252: 36.7445% ( 7) 00:30:46.717 22551.252 - 22665.726: 36.9505% ( 12) 00:30:46.717 22665.726 - 22780.199: 37.1738% ( 13) 00:30:46.717 22780.199 - 22894.672: 37.6889% ( 30) 00:30:46.717 22894.672 - 23009.146: 38.3585% ( 39) 00:30:46.717 23009.146 - 23123.619: 38.9251% ( 33) 00:30:46.717 23123.619 - 23238.093: 39.5604% ( 37) 00:30:46.717 
23238.093 - 23352.566: 40.5735% ( 59) 00:30:46.717 23352.566 - 23467.039: 41.8613% ( 75) 00:30:46.717 23467.039 - 23581.513: 43.5440% ( 98) 00:30:46.717 23581.513 - 23695.986: 45.5014% ( 114) 00:30:46.717 23695.986 - 23810.459: 47.6992% ( 128) 00:30:46.717 23810.459 - 23924.933: 49.9485% ( 131) 00:30:46.717 23924.933 - 24039.406: 52.3695% ( 141) 00:30:46.717 24039.406 - 24153.879: 54.7905% ( 141) 00:30:46.717 24153.879 - 24268.353: 56.9196% ( 124) 00:30:46.717 24268.353 - 24382.826: 59.1518% ( 130) 00:30:46.717 24382.826 - 24497.300: 61.4011% ( 131) 00:30:46.717 24497.300 - 24611.773: 63.7363% ( 136) 00:30:46.717 24611.773 - 24726.246: 66.1229% ( 139) 00:30:46.717 24726.246 - 24840.720: 69.0076% ( 168) 00:30:46.717 24840.720 - 24955.193: 71.6174% ( 152) 00:30:46.717 24955.193 - 25069.666: 73.9870% ( 138) 00:30:46.717 25069.666 - 25184.140: 75.9272% ( 113) 00:30:46.717 25184.140 - 25298.613: 77.7129% ( 104) 00:30:46.717 25298.613 - 25413.086: 79.5330% ( 106) 00:30:46.717 25413.086 - 25527.560: 81.1813% ( 96) 00:30:46.717 25527.560 - 25642.033: 82.7953% ( 94) 00:30:46.717 25642.033 - 25756.507: 84.3235% ( 89) 00:30:46.717 25756.507 - 25870.980: 85.9890% ( 97) 00:30:46.717 25870.980 - 25985.453: 87.6374% ( 96) 00:30:46.717 25985.453 - 26099.927: 89.2170% ( 92) 00:30:46.717 26099.927 - 26214.400: 90.6593% ( 84) 00:30:46.717 26214.400 - 26328.873: 91.9986% ( 78) 00:30:46.717 26328.873 - 26443.347: 93.3723% ( 80) 00:30:46.717 26443.347 - 26557.820: 94.5742% ( 70) 00:30:46.717 26557.820 - 26672.293: 95.6559% ( 63) 00:30:46.717 26672.293 - 26786.767: 96.4457% ( 46) 00:30:46.717 26786.767 - 26901.240: 97.0639% ( 36) 00:30:46.717 26901.240 - 27015.714: 97.5275% ( 27) 00:30:46.717 27015.714 - 27130.187: 97.6648% ( 8) 00:30:46.717 27130.187 - 27244.660: 97.7163% ( 3) 00:30:46.717 27244.660 - 27359.134: 97.7507% ( 2) 00:30:46.717 27359.134 - 27473.607: 97.7850% ( 2) 00:30:46.717 27473.607 - 27588.080: 97.8022% ( 1) 00:30:46.717 28961.761 - 29076.234: 97.8365% ( 2) 00:30:46.717 29076.234 - 29190.707: 97.8709% ( 2) 00:30:46.717 29190.707 - 29305.181: 97.9224% ( 3) 00:30:46.717 29305.181 - 29534.128: 97.9911% ( 4) 00:30:46.717 29534.128 - 29763.074: 98.0598% ( 4) 00:30:46.717 29763.074 - 29992.021: 98.1456% ( 5) 00:30:46.717 29992.021 - 30220.968: 98.2143% ( 4) 00:30:46.717 30220.968 - 30449.914: 98.2830% ( 4) 00:30:46.717 30449.914 - 30678.861: 98.3688% ( 5) 00:30:46.717 30678.861 - 30907.808: 98.4375% ( 4) 00:30:46.717 30907.808 - 31136.755: 98.5234% ( 5) 00:30:46.717 31136.755 - 31365.701: 98.6092% ( 5) 00:30:46.717 31365.701 - 31594.648: 98.6951% ( 5) 00:30:46.717 31594.648 - 31823.595: 98.7637% ( 4) 00:30:46.717 31823.595 - 32052.541: 98.8324% ( 4) 00:30:46.717 32052.541 - 32281.488: 98.9011% ( 4) 00:30:46.717 40294.624 - 40523.570: 98.9354% ( 2) 00:30:46.717 40523.570 - 40752.517: 99.0213% ( 5) 00:30:46.717 40752.517 - 40981.464: 99.1071% ( 5) 00:30:46.717 40981.464 - 41210.410: 99.1930% ( 5) 00:30:46.717 41210.410 - 41439.357: 99.2788% ( 5) 00:30:46.717 41439.357 - 41668.304: 99.3647% ( 5) 00:30:46.717 41668.304 - 41897.251: 99.4334% ( 4) 00:30:46.717 41897.251 - 42126.197: 99.5021% ( 4) 00:30:46.717 42126.197 - 42355.144: 99.5879% ( 5) 00:30:46.717 42355.144 - 42584.091: 99.6738% ( 5) 00:30:46.717 42584.091 - 42813.038: 99.7596% ( 5) 00:30:46.717 42813.038 - 43041.984: 99.8455% ( 5) 00:30:46.717 43041.984 - 43270.931: 99.9313% ( 5) 00:30:46.717 43270.931 - 43499.878: 100.0000% ( 4) 00:30:46.717 00:30:46.717 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:46.717 
============================================================================== 00:30:46.717 Range in us Cumulative IO count 00:30:46.717 13107.200 - 13164.437: 0.0340% ( 2) 00:30:46.717 13164.437 - 13221.673: 0.0679% ( 2) 00:30:46.717 13221.673 - 13278.910: 0.1189% ( 3) 00:30:46.717 13278.910 - 13336.147: 0.1698% ( 3) 00:30:46.717 13336.147 - 13393.383: 0.2208% ( 3) 00:30:46.717 13393.383 - 13450.620: 0.3057% ( 5) 00:30:46.717 13450.620 - 13507.857: 0.4076% ( 6) 00:30:46.717 13507.857 - 13565.093: 0.6454% ( 14) 00:30:46.717 13565.093 - 13622.330: 0.8322% ( 11) 00:30:46.717 13622.330 - 13679.567: 1.0020% ( 10) 00:30:46.717 13679.567 - 13736.803: 1.2228% ( 13) 00:30:46.717 13736.803 - 13794.040: 1.4096% ( 11) 00:30:46.717 13794.040 - 13851.277: 1.5625% ( 9) 00:30:46.717 13851.277 - 13908.514: 1.7833% ( 13) 00:30:46.717 13908.514 - 13965.750: 2.0211% ( 14) 00:30:46.717 13965.750 - 14022.987: 2.7174% ( 41) 00:30:46.717 14022.987 - 14080.224: 3.1760% ( 27) 00:30:46.717 14080.224 - 14137.460: 3.5156% ( 20) 00:30:46.717 14137.460 - 14194.697: 3.8213% ( 18) 00:30:46.717 14194.697 - 14251.934: 4.1101% ( 17) 00:30:46.717 14251.934 - 14309.170: 4.5856% ( 28) 00:30:46.717 14309.170 - 14366.407: 4.9423% ( 21) 00:30:46.717 14366.407 - 14423.644: 5.3159% ( 22) 00:30:46.717 14423.644 - 14480.880: 5.6046% ( 17) 00:30:46.717 14480.880 - 14538.117: 5.8933% ( 17) 00:30:46.717 14538.117 - 14595.354: 6.1990% ( 18) 00:30:46.717 14595.354 - 14652.590: 6.5048% ( 18) 00:30:46.717 14652.590 - 14767.064: 7.2351% ( 43) 00:30:46.717 14767.064 - 14881.537: 8.5428% ( 77) 00:30:46.717 14881.537 - 14996.010: 9.5109% ( 57) 00:30:46.717 14996.010 - 15110.484: 10.3770% ( 51) 00:30:46.717 15110.484 - 15224.957: 11.0564% ( 40) 00:30:46.717 15224.957 - 15339.431: 11.5659% ( 30) 00:30:46.717 15339.431 - 15453.904: 12.0075% ( 26) 00:30:46.717 15453.904 - 15568.377: 12.2792% ( 16) 00:30:46.717 15568.377 - 15682.851: 12.6359% ( 21) 00:30:46.717 15682.851 - 15797.324: 12.9246% ( 17) 00:30:46.717 15797.324 - 15911.797: 13.1624% ( 14) 00:30:46.717 15911.797 - 16026.271: 13.6209% ( 27) 00:30:46.717 16026.271 - 16140.744: 14.2663% ( 38) 00:30:46.717 16140.744 - 16255.217: 14.9966% ( 43) 00:30:46.717 16255.217 - 16369.691: 15.8458% ( 50) 00:30:46.717 16369.691 - 16484.164: 16.7969% ( 56) 00:30:46.717 16484.164 - 16598.638: 17.7310% ( 55) 00:30:46.717 16598.638 - 16713.111: 18.4273% ( 41) 00:30:46.717 16713.111 - 16827.584: 19.0048% ( 34) 00:30:46.717 16827.584 - 16942.058: 19.4803% ( 28) 00:30:46.717 16942.058 - 17056.531: 20.0917% ( 36) 00:30:46.717 17056.531 - 17171.004: 20.6692% ( 34) 00:30:46.717 17171.004 - 17285.478: 21.2466% ( 34) 00:30:46.717 17285.478 - 17399.951: 21.8410% ( 35) 00:30:46.717 17399.951 - 17514.424: 22.4524% ( 36) 00:30:46.717 17514.424 - 17628.898: 23.2677% ( 48) 00:30:46.717 17628.898 - 17743.371: 24.0319% ( 45) 00:30:46.717 17743.371 - 17857.845: 24.7792% ( 44) 00:30:46.717 17857.845 - 17972.318: 25.7812% ( 59) 00:30:46.717 17972.318 - 18086.791: 26.5455% ( 45) 00:30:46.717 18086.791 - 18201.265: 27.2418% ( 41) 00:30:46.717 18201.265 - 18315.738: 27.6495% ( 24) 00:30:46.717 18315.738 - 18430.211: 28.1080% ( 27) 00:30:46.717 18430.211 - 18544.685: 28.6855% ( 34) 00:30:46.717 18544.685 - 18659.158: 29.3648% ( 40) 00:30:46.717 18659.158 - 18773.631: 30.1121% ( 44) 00:30:46.717 18773.631 - 18888.105: 30.6046% ( 29) 00:30:46.717 18888.105 - 19002.578: 31.1141% ( 30) 00:30:46.717 19002.578 - 19117.052: 31.6746% ( 33) 00:30:46.717 19117.052 - 19231.525: 32.2181% ( 32) 00:30:46.717 19231.525 - 19345.998: 32.7276% ( 30) 
00:30:46.717 19345.998 - 19460.472: 33.1182% ( 23) 00:30:46.717 19460.472 - 19574.945: 33.3050% ( 11) 00:30:46.717 19574.945 - 19689.418: 33.5088% ( 12) 00:30:46.717 19689.418 - 19803.892: 33.7466% ( 14) 00:30:46.717 19803.892 - 19918.365: 33.9844% ( 14) 00:30:46.717 19918.365 - 20032.838: 34.2391% ( 15) 00:30:46.717 20032.838 - 20147.312: 34.5279% ( 17) 00:30:46.717 20147.312 - 20261.785: 34.7486% ( 13) 00:30:46.717 20261.785 - 20376.259: 34.9694% ( 13) 00:30:46.718 20376.259 - 20490.732: 35.1902% ( 13) 00:30:46.718 20490.732 - 20605.205: 35.3940% ( 12) 00:30:46.718 20605.205 - 20719.679: 35.5469% ( 9) 00:30:46.718 20719.679 - 20834.152: 35.7167% ( 10) 00:30:46.718 20834.152 - 20948.625: 35.8356% ( 7) 00:30:46.718 20948.625 - 21063.099: 35.9885% ( 9) 00:30:46.718 21063.099 - 21177.572: 36.0904% ( 6) 00:30:46.718 21177.572 - 21292.045: 36.2602% ( 10) 00:30:46.718 21292.045 - 21406.519: 36.4300% ( 10) 00:30:46.718 21406.519 - 21520.992: 36.5999% ( 10) 00:30:46.718 21520.992 - 21635.466: 36.7357% ( 8) 00:30:46.718 21635.466 - 21749.939: 36.9056% ( 10) 00:30:46.718 21749.939 - 21864.412: 37.0245% ( 7) 00:30:46.718 21864.412 - 21978.886: 37.1433% ( 7) 00:30:46.718 21978.886 - 22093.359: 37.4321% ( 17) 00:30:46.718 22093.359 - 22207.832: 38.1454% ( 42) 00:30:46.718 22207.832 - 22322.306: 38.4001% ( 15) 00:30:46.718 22322.306 - 22436.779: 38.6379% ( 14) 00:30:46.718 22436.779 - 22551.252: 38.8757% ( 14) 00:30:46.718 22551.252 - 22665.726: 39.1814% ( 18) 00:30:46.718 22665.726 - 22780.199: 39.6569% ( 28) 00:30:46.718 22780.199 - 22894.672: 40.1325% ( 28) 00:30:46.718 22894.672 - 23009.146: 40.5910% ( 27) 00:30:46.718 23009.146 - 23123.619: 41.1345% ( 32) 00:30:46.718 23123.619 - 23238.093: 41.9837% ( 50) 00:30:46.718 23238.093 - 23352.566: 43.1895% ( 71) 00:30:46.718 23352.566 - 23467.039: 44.4293% ( 73) 00:30:46.718 23467.039 - 23581.513: 46.1107% ( 99) 00:30:46.718 23581.513 - 23695.986: 48.0978% ( 117) 00:30:46.718 23695.986 - 23810.459: 50.0849% ( 117) 00:30:46.718 23810.459 - 23924.933: 52.3947% ( 136) 00:30:46.718 23924.933 - 24039.406: 54.6535% ( 133) 00:30:46.718 24039.406 - 24153.879: 56.7595% ( 124) 00:30:46.718 24153.879 - 24268.353: 59.1202% ( 139) 00:30:46.718 24268.353 - 24382.826: 61.2942% ( 128) 00:30:46.718 24382.826 - 24497.300: 63.2982% ( 118) 00:30:46.718 24497.300 - 24611.773: 65.5231% ( 131) 00:30:46.718 24611.773 - 24726.246: 68.0876% ( 151) 00:30:46.718 24726.246 - 24840.720: 70.8220% ( 161) 00:30:46.718 24840.720 - 24955.193: 73.2337% ( 142) 00:30:46.718 24955.193 - 25069.666: 75.5944% ( 139) 00:30:46.718 25069.666 - 25184.140: 77.9721% ( 140) 00:30:46.718 25184.140 - 25298.613: 79.7554% ( 105) 00:30:46.718 25298.613 - 25413.086: 81.7425% ( 117) 00:30:46.718 25413.086 - 25527.560: 83.4069% ( 98) 00:30:46.718 25527.560 - 25642.033: 84.8336% ( 84) 00:30:46.718 25642.033 - 25756.507: 86.3961% ( 92) 00:30:46.718 25756.507 - 25870.980: 87.7717% ( 81) 00:30:46.718 25870.980 - 25985.453: 89.2493% ( 87) 00:30:46.718 25985.453 - 26099.927: 90.8118% ( 92) 00:30:46.718 26099.927 - 26214.400: 92.0686% ( 74) 00:30:46.718 26214.400 - 26328.873: 93.4613% ( 82) 00:30:46.718 26328.873 - 26443.347: 94.7520% ( 76) 00:30:46.718 26443.347 - 26557.820: 95.9239% ( 69) 00:30:46.718 26557.820 - 26672.293: 96.9260% ( 59) 00:30:46.718 26672.293 - 26786.767: 97.7242% ( 47) 00:30:46.718 26786.767 - 26901.240: 98.2167% ( 29) 00:30:46.718 26901.240 - 27015.714: 98.5734% ( 21) 00:30:46.718 27015.714 - 27130.187: 98.7092% ( 8) 00:30:46.718 27130.187 - 27244.660: 98.7262% ( 1) 00:30:46.718 27244.660 - 
27359.134: 98.7772% ( 3) 00:30:46.718 27359.134 - 27473.607: 98.8111% ( 2) 00:30:46.718 27473.607 - 27588.080: 98.8451% ( 2) 00:30:46.718 27588.080 - 27702.554: 98.8791% ( 2) 00:30:46.718 27702.554 - 27817.027: 98.9130% ( 2) 00:30:46.718 28961.761 - 29076.234: 98.9300% ( 1) 00:30:46.718 29076.234 - 29190.707: 98.9640% ( 2) 00:30:46.718 29190.707 - 29305.181: 98.9980% ( 2) 00:30:46.718 29305.181 - 29534.128: 99.0829% ( 5) 00:30:46.718 29534.128 - 29763.074: 99.1678% ( 5) 00:30:46.718 29763.074 - 29992.021: 99.2357% ( 4) 00:30:46.718 29992.021 - 30220.968: 99.3207% ( 5) 00:30:46.718 30220.968 - 30449.914: 99.4056% ( 5) 00:30:46.718 30449.914 - 30678.861: 99.4735% ( 4) 00:30:46.718 30678.861 - 30907.808: 99.5584% ( 5) 00:30:46.718 30907.808 - 31136.755: 99.6264% ( 4) 00:30:46.718 31136.755 - 31365.701: 99.7113% ( 5) 00:30:46.718 31365.701 - 31594.648: 99.7792% ( 4) 00:30:46.718 31594.648 - 31823.595: 99.8641% ( 5) 00:30:46.718 31823.595 - 32052.541: 99.9321% ( 4) 00:30:46.718 32052.541 - 32281.488: 100.0000% ( 4) 00:30:46.718 00:30:46.718 17:27:23 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:46.718 00:30:46.718 real 0m2.736s 00:30:46.718 user 0m2.304s 00:30:46.718 sys 0m0.322s 00:30:46.718 17:27:23 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.718 17:27:23 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:30:46.718 ************************************ 00:30:46.718 END TEST nvme_perf 00:30:46.718 ************************************ 00:30:46.718 17:27:24 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:46.718 17:27:24 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:46.718 17:27:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.718 17:27:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:46.718 ************************************ 00:30:46.718 START TEST nvme_hello_world 00:30:46.718 ************************************ 00:30:46.718 17:27:24 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:46.984 Initializing NVMe Controllers 00:30:46.984 Attached to 0000:00:10.0 00:30:46.984 Namespace ID: 1 size: 6GB 00:30:46.984 Attached to 0000:00:11.0 00:30:46.984 Namespace ID: 1 size: 5GB 00:30:46.984 Attached to 0000:00:13.0 00:30:46.984 Namespace ID: 1 size: 1GB 00:30:46.984 Attached to 0000:00:12.0 00:30:46.984 Namespace ID: 1 size: 4GB 00:30:46.984 Namespace ID: 2 size: 4GB 00:30:46.984 Namespace ID: 3 size: 4GB 00:30:46.984 Initialization complete. 00:30:46.984 INFO: using host memory buffer for IO 00:30:46.984 Hello world! 00:30:46.984 INFO: using host memory buffer for IO 00:30:46.984 Hello world! 00:30:46.984 INFO: using host memory buffer for IO 00:30:46.984 Hello world! 00:30:46.984 INFO: using host memory buffer for IO 00:30:46.984 Hello world! 00:30:46.984 INFO: using host memory buffer for IO 00:30:46.984 Hello world! 00:30:46.984 INFO: using host memory buffer for IO 00:30:46.984 Hello world! 
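Editor's note: the hello_world pass above attaches to each of the four PCIe controllers, reports every active namespace, then performs one write/read round trip per namespace. A minimal sketch of the attach-and-enumerate half, assuming SPDK's public probe/attach API from spdk/nvme.h; the program name and GB rounding are illustrative, and the I/O round trip is omitted:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller the probe discovers */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        /* Produces the "Namespace ID: N size: NGB" lines seen in the log. */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("Namespace ID: %" PRIu32 " size: %" PRIu64 "GB\n",
                   nsid, spdk_nvme_ns_get_size(ns) / 1000000000ULL);
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch"; /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* A NULL transport ID probes the local PCIe bus. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }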
00:30:46.984 00:30:46.984 real 0m0.362s 00:30:46.984 user 0m0.110s 00:30:46.984 sys 0m0.203s 00:30:46.984 17:27:24 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.985 17:27:24 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:46.985 ************************************ 00:30:46.985 END TEST nvme_hello_world 00:30:46.985 ************************************ 00:30:46.985 17:27:24 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:46.985 17:27:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:46.985 17:27:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.985 17:27:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:47.255 ************************************ 00:30:47.255 START TEST nvme_sgl 00:30:47.255 ************************************ 00:30:47.255 17:27:24 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:47.514 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:30:47.514 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:30:47.514 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:30:47.515 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:30:47.515 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:30:47.515 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:30:47.515 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:30:47.515 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:30:47.515 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:30:47.515 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:30:47.515 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:30:47.515 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:30:47.515 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:30:47.515 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:30:47.515 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:30:47.515 NVMe Readv/Writev Request test 00:30:47.515 Attached to 0000:00:10.0 00:30:47.515 Attached to 0000:00:11.0 00:30:47.515 Attached to 0000:00:13.0 00:30:47.515 Attached to 0000:00:12.0 00:30:47.515 0000:00:10.0: build_io_request_2 test passed 00:30:47.515 0000:00:10.0: build_io_request_4 test passed 00:30:47.515 0000:00:10.0: build_io_request_5 test passed 00:30:47.515 0000:00:10.0: build_io_request_6 test passed 00:30:47.515 0000:00:10.0: build_io_request_7 test passed 00:30:47.515 0000:00:10.0: build_io_request_10 test passed 00:30:47.515 0000:00:11.0: build_io_request_2 test passed 00:30:47.515 0000:00:11.0: build_io_request_4 test passed 00:30:47.515 0000:00:11.0: build_io_request_5 test passed 00:30:47.515 0000:00:11.0: build_io_request_6 test passed 00:30:47.515 0000:00:11.0: build_io_request_7 test passed 00:30:47.515 0000:00:11.0: build_io_request_10 test passed 00:30:47.515 Cleaning up... 00:30:47.515 00:30:47.515 real 0m0.483s 00:30:47.515 user 0m0.286s 00:30:47.515 sys 0m0.150s 00:30:47.515 17:27:24 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:47.515 17:27:24 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:30:47.515 ************************************ 00:30:47.515 END TEST nvme_sgl 00:30:47.515 ************************************ 00:30:47.515 17:27:24 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:47.515 17:27:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:47.515 17:27:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:47.515 17:27:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:47.515 ************************************ 00:30:47.515 START TEST nvme_e2edp 00:30:47.515 ************************************ 00:30:47.515 17:27:24 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:48.083 NVMe Write/Read with End-to-End data protection test 00:30:48.083 Attached to 0000:00:10.0 00:30:48.083 Attached to 0000:00:11.0 00:30:48.083 Attached to 0000:00:13.0 00:30:48.083 Attached to 0000:00:12.0 00:30:48.083 Cleaning up... 
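Editor's note: the e2edp pass above exercises end-to-end data protection: on a namespace formatted with protection information, setting the PRACT I/O flag makes the controller generate PI on writes and verify it on reads, so the host never handles the PI bytes itself. A hedged sketch of such a write with SPDK's public command API; write_with_pract and io_done are illustrative names, and namespace, qpair, and buffer setup are assumed to happen elsewhere:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* Poll spdk_nvme_qpair_process_completions() until this flag flips. */
        *(bool *)arg = true;
    }

    static int
    write_with_pract(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                     void *buf, uint64_t lba, uint32_t lba_count, bool *done)
    {
        /* SPDK_NVME_IO_FLAGS_PRACT sets the PRACT bit in cdw12: the host
         * supplies only data blocks and the controller inserts and checks
         * the protection information itself. */
        return spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                      io_done, done, SPDK_NVME_IO_FLAGS_PRACT);
    }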
00:30:48.083 00:30:48.083 real 0m0.296s 00:30:48.083 user 0m0.110s 00:30:48.083 sys 0m0.139s 00:30:48.083 17:27:25 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.083 17:27:25 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:30:48.083 ************************************ 00:30:48.083 END TEST nvme_e2edp 00:30:48.083 ************************************ 00:30:48.083 17:27:25 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:48.083 17:27:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.083 17:27:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.083 17:27:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:48.083 ************************************ 00:30:48.083 START TEST nvme_reserve 00:30:48.083 ************************************ 00:30:48.083 17:27:25 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:48.341 ===================================================== 00:30:48.341 NVMe Controller at PCI bus 0, device 16, function 0 00:30:48.341 ===================================================== 00:30:48.341 Reservations: Not Supported 00:30:48.341 ===================================================== 00:30:48.341 NVMe Controller at PCI bus 0, device 17, function 0 00:30:48.341 ===================================================== 00:30:48.341 Reservations: Not Supported 00:30:48.341 ===================================================== 00:30:48.341 NVMe Controller at PCI bus 0, device 19, function 0 00:30:48.341 ===================================================== 00:30:48.341 Reservations: Not Supported 00:30:48.341 ===================================================== 00:30:48.341 NVMe Controller at PCI bus 0, device 18, function 0 00:30:48.341 ===================================================== 00:30:48.341 Reservations: Not Supported 00:30:48.341 Reservation test passed 00:30:48.341 00:30:48.341 real 0m0.307s 00:30:48.341 user 0m0.103s 00:30:48.341 sys 0m0.155s 00:30:48.341 17:27:25 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.341 17:27:25 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:30:48.341 ************************************ 00:30:48.341 END TEST nvme_reserve 00:30:48.341 ************************************ 00:30:48.341 17:27:25 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:48.341 17:27:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.341 17:27:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.341 17:27:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:48.341 ************************************ 00:30:48.341 START TEST nvme_err_injection 00:30:48.341 ************************************ 00:30:48.341 17:27:25 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:48.599 NVMe Error Injection test 00:30:48.600 Attached to 0000:00:10.0 00:30:48.600 Attached to 0000:00:11.0 00:30:48.600 Attached to 0000:00:13.0 00:30:48.600 Attached to 0000:00:12.0 00:30:48.600 0000:00:10.0: get features failed as expected 00:30:48.600 0000:00:11.0: get features failed as expected 00:30:48.600 0000:00:13.0: get features failed as expected 00:30:48.600 0000:00:12.0: get features failed as expected 00:30:48.600 
0000:00:10.0: get features successfully as expected 00:30:48.600 0000:00:11.0: get features successfully as expected 00:30:48.600 0000:00:13.0: get features successfully as expected 00:30:48.600 0000:00:12.0: get features successfully as expected 00:30:48.600 0000:00:10.0: read failed as expected 00:30:48.600 0000:00:11.0: read failed as expected 00:30:48.600 0000:00:13.0: read failed as expected 00:30:48.600 0000:00:12.0: read failed as expected 00:30:48.600 0000:00:10.0: read successfully as expected 00:30:48.600 0000:00:11.0: read successfully as expected 00:30:48.600 0000:00:13.0: read successfully as expected 00:30:48.600 0000:00:12.0: read successfully as expected 00:30:48.600 Cleaning up... 00:30:48.600 00:30:48.600 real 0m0.298s 00:30:48.600 user 0m0.105s 00:30:48.600 sys 0m0.153s 00:30:48.600 17:27:25 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.600 17:27:25 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:30:48.600 ************************************ 00:30:48.600 END TEST nvme_err_injection 00:30:48.600 ************************************ 00:30:48.600 17:27:25 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:48.600 17:27:25 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:30:48.600 17:27:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.600 17:27:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:48.600 ************************************ 00:30:48.600 START TEST nvme_overhead 00:30:48.600 ************************************ 00:30:48.600 17:27:26 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:49.975 Initializing NVMe Controllers 00:30:49.975 Attached to 0000:00:10.0 00:30:49.975 Attached to 0000:00:11.0 00:30:49.975 Attached to 0000:00:13.0 00:30:49.975 Attached to 0000:00:12.0 00:30:49.975 Initialization complete. Launching workers. 
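Editor's note: the submit and complete histograms that follow break down where per-I/O CPU time goes: "submit" is the time spent inside the queueing call, "complete" the time attributed to each reaped completion. A generic sketch of that measurement pattern using SPDK's TSC helpers; the variable names and the single-block read are illustrative, not the test's actual code:

    #include <stdint.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static uint64_t submit_ns_min = UINT64_MAX, submit_ns_max,
                    submit_ns_total, ios;

    static void
    timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, spdk_nvme_cmd_cb cb, void *cb_arg)
    {
        uint64_t t0 = spdk_get_ticks();

        spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb, cb_arg, 0);

        /* Convert the TSC delta to nanoseconds and fold it into the stats
         * behind the "submit (in ns) avg, min, max" summary line below. */
        uint64_t d = (spdk_get_ticks() - t0) * 1000000000ULL
                     / spdk_get_ticks_hz();
        if (d < submit_ns_min) submit_ns_min = d;
        if (d > submit_ns_max) submit_ns_max = d;
        submit_ns_total += d;
        ios++; /* avg = submit_ns_total / ios */
    }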
00:30:49.975 submit (in ns) avg, min, max = 13333.4, 10795.6, 54406.1 00:30:49.975 complete (in ns) avg, min, max = 7995.3, 6507.4, 1441997.4 00:30:49.975 00:30:49.975 Submit histogram 00:30:49.975 ================ 00:30:49.975 Range in us Cumulative Count 00:30:49.975 10.788 - 10.844: 0.0067% ( 1) 00:30:49.975 11.179 - 11.235: 0.0133% ( 1) 00:30:49.975 11.291 - 11.347: 0.0200% ( 1) 00:30:49.975 11.347 - 11.403: 0.0733% ( 8) 00:30:49.975 11.403 - 11.459: 0.1599% ( 13) 00:30:49.975 11.459 - 11.514: 0.3531% ( 29) 00:30:49.975 11.514 - 11.570: 0.4930% ( 21) 00:30:49.975 11.570 - 11.626: 0.9260% ( 65) 00:30:49.975 11.626 - 11.682: 1.4324% ( 76) 00:30:49.975 11.682 - 11.738: 2.1319% ( 105) 00:30:49.975 11.738 - 11.794: 3.0913% ( 144) 00:30:49.975 11.794 - 11.850: 4.4770% ( 208) 00:30:49.975 11.850 - 11.906: 6.0626% ( 238) 00:30:49.975 11.906 - 11.962: 7.8548% ( 269) 00:30:49.975 11.962 - 12.017: 9.9267% ( 311) 00:30:49.975 12.017 - 12.073: 12.3651% ( 366) 00:30:49.975 12.073 - 12.129: 15.0966% ( 410) 00:30:49.975 12.129 - 12.185: 17.9747% ( 432) 00:30:49.975 12.185 - 12.241: 21.0460% ( 461) 00:30:49.975 12.241 - 12.297: 24.4237% ( 507) 00:30:49.975 12.297 - 12.353: 27.7282% ( 496) 00:30:49.975 12.353 - 12.409: 30.9660% ( 486) 00:30:49.975 12.409 - 12.465: 34.0640% ( 465) 00:30:49.975 12.465 - 12.521: 37.3884% ( 499) 00:30:49.975 12.521 - 12.576: 40.6929% ( 496) 00:30:49.975 12.576 - 12.632: 43.9973% ( 496) 00:30:49.975 12.632 - 12.688: 47.2818% ( 493) 00:30:49.975 12.688 - 12.744: 50.1865% ( 436) 00:30:49.975 12.744 - 12.800: 53.0779% ( 434) 00:30:49.975 12.800 - 12.856: 55.9627% ( 433) 00:30:49.975 12.856 - 12.912: 58.3211% ( 354) 00:30:49.975 12.912 - 12.968: 60.8061% ( 373) 00:30:49.975 12.968 - 13.024: 62.9980% ( 329) 00:30:49.975 13.024 - 13.079: 65.1233% ( 319) 00:30:49.975 13.079 - 13.135: 67.1153% ( 299) 00:30:49.975 13.135 - 13.191: 68.6609% ( 232) 00:30:49.975 13.191 - 13.247: 70.1399% ( 222) 00:30:49.975 13.247 - 13.303: 71.5723% ( 215) 00:30:49.975 13.303 - 13.359: 72.8181% ( 187) 00:30:49.975 13.359 - 13.415: 74.0706% ( 188) 00:30:49.975 13.415 - 13.471: 75.1166% ( 157) 00:30:49.975 13.471 - 13.527: 76.1759% ( 159) 00:30:49.975 13.527 - 13.583: 77.1486% ( 146) 00:30:49.975 13.583 - 13.638: 77.9214% ( 116) 00:30:49.975 13.638 - 13.694: 78.8075% ( 133) 00:30:49.975 13.694 - 13.750: 79.4404% ( 95) 00:30:49.975 13.750 - 13.806: 80.1466% ( 106) 00:30:49.975 13.806 - 13.862: 80.8461% ( 105) 00:30:49.975 13.862 - 13.918: 81.4191% ( 86) 00:30:49.975 13.918 - 13.974: 81.8055% ( 58) 00:30:49.975 13.974 - 14.030: 82.2718% ( 70) 00:30:49.975 14.030 - 14.086: 82.6582% ( 58) 00:30:49.975 14.086 - 14.141: 83.1446% ( 73) 00:30:49.975 14.141 - 14.197: 83.4444% ( 45) 00:30:49.975 14.197 - 14.253: 83.7442% ( 45) 00:30:49.975 14.253 - 14.309: 84.0573% ( 47) 00:30:49.975 14.309 - 14.421: 84.5103% ( 68) 00:30:49.975 14.421 - 14.533: 84.8834% ( 56) 00:30:49.975 14.533 - 14.645: 85.2232% ( 51) 00:30:49.975 14.645 - 14.756: 85.6229% ( 60) 00:30:49.975 14.756 - 14.868: 86.1093% ( 73) 00:30:49.975 14.868 - 14.980: 86.8155% ( 106) 00:30:49.975 14.980 - 15.092: 87.6282% ( 122) 00:30:49.975 15.092 - 15.203: 88.4877% ( 129) 00:30:49.975 15.203 - 15.315: 89.3071% ( 123) 00:30:49.975 15.315 - 15.427: 90.0866% ( 117) 00:30:49.975 15.427 - 15.539: 90.7528% ( 100) 00:30:49.975 15.539 - 15.651: 91.2725% ( 78) 00:30:49.975 15.651 - 15.762: 91.7055% ( 65) 00:30:49.975 15.762 - 15.874: 92.0853% ( 57) 00:30:49.975 15.874 - 15.986: 92.3718% ( 43) 00:30:49.975 15.986 - 16.098: 92.6915% ( 48) 00:30:49.975 16.098 - 16.210: 
92.9714% ( 42) 00:30:49.975 16.210 - 16.321: 93.2911% ( 48) 00:30:49.975 16.321 - 16.433: 93.6709% ( 57) 00:30:49.975 16.433 - 16.545: 93.9840% ( 47) 00:30:49.975 16.545 - 16.657: 94.2438% ( 39) 00:30:49.975 16.657 - 16.769: 94.4970% ( 38) 00:30:49.975 16.769 - 16.880: 94.7302% ( 35) 00:30:49.975 16.880 - 16.992: 94.9767% ( 37) 00:30:49.975 16.992 - 17.104: 95.2099% ( 35) 00:30:49.975 17.104 - 17.216: 95.4031% ( 29) 00:30:49.975 17.216 - 17.328: 95.5963% ( 29) 00:30:49.975 17.328 - 17.439: 95.7695% ( 26) 00:30:49.975 17.439 - 17.551: 95.9494% ( 27) 00:30:49.975 17.551 - 17.663: 96.1492% ( 30) 00:30:49.975 17.663 - 17.775: 96.3091% ( 24) 00:30:49.975 17.775 - 17.886: 96.4957% ( 28) 00:30:49.975 17.886 - 17.998: 96.6689% ( 26) 00:30:49.975 17.998 - 18.110: 96.8021% ( 20) 00:30:49.975 18.110 - 18.222: 96.9087% ( 16) 00:30:49.975 18.222 - 18.334: 97.0486% ( 21) 00:30:49.975 18.334 - 18.445: 97.1686% ( 18) 00:30:49.975 18.445 - 18.557: 97.2685% ( 15) 00:30:49.975 18.557 - 18.669: 97.3951% ( 19) 00:30:49.975 18.669 - 18.781: 97.5150% ( 18) 00:30:49.975 18.781 - 18.893: 97.6083% ( 14) 00:30:49.975 18.893 - 19.004: 97.6882% ( 12) 00:30:49.975 19.004 - 19.116: 97.7881% ( 15) 00:30:49.975 19.116 - 19.228: 97.8881% ( 15) 00:30:49.975 19.228 - 19.340: 97.9480% ( 9) 00:30:49.975 19.340 - 19.452: 97.9880% ( 6) 00:30:49.975 19.452 - 19.563: 98.0546% ( 10) 00:30:49.975 19.563 - 19.675: 98.2079% ( 23) 00:30:49.975 19.675 - 19.787: 98.2412% ( 5) 00:30:49.975 19.787 - 19.899: 98.3078% ( 10) 00:30:49.975 19.899 - 20.010: 98.3877% ( 12) 00:30:49.975 20.010 - 20.122: 98.4610% ( 11) 00:30:49.975 20.122 - 20.234: 98.5143% ( 8) 00:30:49.975 20.234 - 20.346: 98.5610% ( 7) 00:30:49.975 20.346 - 20.458: 98.6209% ( 9) 00:30:49.975 20.458 - 20.569: 98.6742% ( 8) 00:30:49.975 20.569 - 20.681: 98.7342% ( 9) 00:30:49.975 20.681 - 20.793: 98.7808% ( 7) 00:30:49.975 20.793 - 20.905: 98.8408% ( 9) 00:30:49.976 20.905 - 21.017: 98.8874% ( 7) 00:30:49.976 21.017 - 21.128: 98.9340% ( 7) 00:30:49.976 21.128 - 21.240: 98.9740% ( 6) 00:30:49.976 21.240 - 21.352: 99.0140% ( 6) 00:30:49.976 21.352 - 21.464: 99.0473% ( 5) 00:30:49.976 21.464 - 21.576: 99.0740% ( 4) 00:30:49.976 21.576 - 21.687: 99.1006% ( 4) 00:30:49.976 21.687 - 21.799: 99.1606% ( 9) 00:30:49.976 21.799 - 21.911: 99.1939% ( 5) 00:30:49.976 21.911 - 22.023: 99.2139% ( 3) 00:30:49.976 22.023 - 22.134: 99.2405% ( 4) 00:30:49.976 22.134 - 22.246: 99.2871% ( 7) 00:30:49.976 22.246 - 22.358: 99.3205% ( 5) 00:30:49.976 22.358 - 22.470: 99.3338% ( 2) 00:30:49.976 22.470 - 22.582: 99.3538% ( 3) 00:30:49.976 22.582 - 22.693: 99.3671% ( 2) 00:30:49.976 22.693 - 22.805: 99.4004% ( 5) 00:30:49.976 22.805 - 22.917: 99.4270% ( 4) 00:30:49.976 22.917 - 23.029: 99.4470% ( 3) 00:30:49.976 23.141 - 23.252: 99.4670% ( 3) 00:30:49.976 23.252 - 23.364: 99.4870% ( 3) 00:30:49.976 23.476 - 23.588: 99.5070% ( 3) 00:30:49.976 23.588 - 23.700: 99.5203% ( 2) 00:30:49.976 23.700 - 23.811: 99.5470% ( 4) 00:30:49.976 23.811 - 23.923: 99.5869% ( 6) 00:30:49.976 23.923 - 24.035: 99.6136% ( 4) 00:30:49.976 24.259 - 24.370: 99.6203% ( 1) 00:30:49.976 24.370 - 24.482: 99.6402% ( 3) 00:30:49.976 24.482 - 24.594: 99.6536% ( 2) 00:30:49.976 24.594 - 24.706: 99.6602% ( 1) 00:30:49.976 24.706 - 24.817: 99.6736% ( 2) 00:30:49.976 24.817 - 24.929: 99.6869% ( 2) 00:30:49.976 24.929 - 25.041: 99.6935% ( 1) 00:30:49.976 25.041 - 25.153: 99.7202% ( 4) 00:30:49.976 25.153 - 25.265: 99.7268% ( 1) 00:30:49.976 25.265 - 25.376: 99.7335% ( 1) 00:30:49.976 25.488 - 25.600: 99.7402% ( 1) 00:30:49.976 25.600 - 25.712: 
99.7468% ( 1) 00:30:49.976 25.824 - 25.935: 99.7535% ( 1) 00:30:49.976 26.047 - 26.159: 99.7602% ( 1) 00:30:49.976 26.383 - 26.494: 99.7668% ( 1) 00:30:49.976 26.718 - 26.830: 99.7735% ( 1) 00:30:49.976 26.830 - 26.941: 99.7801% ( 1) 00:30:49.976 26.941 - 27.053: 99.7935% ( 2) 00:30:49.976 27.165 - 27.277: 99.8001% ( 1) 00:30:49.976 27.277 - 27.389: 99.8068% ( 1) 00:30:49.976 27.836 - 27.948: 99.8135% ( 1) 00:30:49.976 28.283 - 28.395: 99.8268% ( 2) 00:30:49.976 28.395 - 28.507: 99.8401% ( 2) 00:30:49.976 28.507 - 28.618: 99.8468% ( 1) 00:30:49.976 28.842 - 29.066: 99.8534% ( 1) 00:30:49.976 29.066 - 29.289: 99.8601% ( 1) 00:30:49.976 29.736 - 29.960: 99.8668% ( 1) 00:30:49.976 29.960 - 30.183: 99.8734% ( 1) 00:30:49.976 30.183 - 30.407: 99.8867% ( 2) 00:30:49.976 30.407 - 30.631: 99.8934% ( 1) 00:30:49.976 30.854 - 31.078: 99.9001% ( 1) 00:30:49.976 31.078 - 31.301: 99.9134% ( 2) 00:30:49.976 31.301 - 31.525: 99.9334% ( 3) 00:30:49.976 32.196 - 32.419: 99.9467% ( 2) 00:30:49.976 32.643 - 32.866: 99.9667% ( 3) 00:30:49.976 33.314 - 33.537: 99.9734% ( 1) 00:30:49.976 34.208 - 34.431: 99.9800% ( 1) 00:30:49.976 40.915 - 41.139: 99.9867% ( 1) 00:30:49.976 43.822 - 44.045: 99.9933% ( 1) 00:30:49.976 54.330 - 54.554: 100.0000% ( 1) 00:30:49.976 00:30:49.976 Complete histogram 00:30:49.976 ================== 00:30:49.976 Range in us Cumulative Count 00:30:49.976 6.484 - 6.512: 0.0200% ( 3) 00:30:49.976 6.512 - 6.540: 0.0999% ( 12) 00:30:49.976 6.540 - 6.568: 0.3464% ( 37) 00:30:49.976 6.568 - 6.596: 1.0260% ( 102) 00:30:49.976 6.596 - 6.624: 2.0720% ( 157) 00:30:49.976 6.624 - 6.652: 3.6043% ( 230) 00:30:49.976 6.652 - 6.679: 5.4164% ( 272) 00:30:49.976 6.679 - 6.707: 7.7348% ( 348) 00:30:49.976 6.707 - 6.735: 9.7668% ( 305) 00:30:49.976 6.735 - 6.763: 11.7855% ( 303) 00:30:49.976 6.763 - 6.791: 13.6376% ( 278) 00:30:49.976 6.791 - 6.819: 15.3298% ( 254) 00:30:49.976 6.819 - 6.847: 17.2951% ( 295) 00:30:49.976 6.847 - 6.875: 19.1672% ( 281) 00:30:49.976 6.875 - 6.903: 21.0260% ( 279) 00:30:49.976 6.903 - 6.931: 22.8581% ( 275) 00:30:49.976 6.931 - 6.959: 24.6502% ( 269) 00:30:49.976 6.959 - 6.987: 26.1159% ( 220) 00:30:49.976 6.987 - 7.015: 27.6616% ( 232) 00:30:49.976 7.015 - 7.043: 29.0873% ( 214) 00:30:49.976 7.043 - 7.071: 30.3931% ( 196) 00:30:49.976 7.071 - 7.099: 31.6456% ( 188) 00:30:49.976 7.099 - 7.127: 32.9580% ( 197) 00:30:49.976 7.127 - 7.155: 34.3904% ( 215) 00:30:49.976 7.155 - 7.210: 37.2285% ( 426) 00:30:49.976 7.210 - 7.266: 40.6995% ( 521) 00:30:49.976 7.266 - 7.322: 44.5370% ( 576) 00:30:49.976 7.322 - 7.378: 49.0406% ( 676) 00:30:49.976 7.378 - 7.434: 55.0500% ( 902) 00:30:49.976 7.434 - 7.490: 60.6329% ( 838) 00:30:49.976 7.490 - 7.546: 64.8634% ( 635) 00:30:49.976 7.546 - 7.602: 67.9480% ( 463) 00:30:49.976 7.602 - 7.658: 70.7195% ( 416) 00:30:49.976 7.658 - 7.714: 72.7315% ( 302) 00:30:49.976 7.714 - 7.769: 74.3837% ( 248) 00:30:49.976 7.769 - 7.825: 75.7029% ( 198) 00:30:49.976 7.825 - 7.881: 76.8821% ( 177) 00:30:49.976 7.881 - 7.937: 77.8215% ( 141) 00:30:49.976 7.937 - 7.993: 78.6742% ( 128) 00:30:49.976 7.993 - 8.049: 79.4937% ( 123) 00:30:49.976 8.049 - 8.105: 80.2465% ( 113) 00:30:49.976 8.105 - 8.161: 81.0793% ( 125) 00:30:49.976 8.161 - 8.217: 81.8521% ( 116) 00:30:49.976 8.217 - 8.272: 82.4917% ( 96) 00:30:49.976 8.272 - 8.328: 82.9714% ( 72) 00:30:49.976 8.328 - 8.384: 83.4111% ( 66) 00:30:49.976 8.384 - 8.440: 83.7575% ( 52) 00:30:49.976 8.440 - 8.496: 84.0640% ( 46) 00:30:49.976 8.496 - 8.552: 84.3504% ( 43) 00:30:49.976 8.552 - 8.608: 84.5969% ( 37) 
00:30:49.976 8.608 - 8.664: 84.8501% ( 38) 00:30:49.976 8.664 - 8.720: 85.0033% ( 23) 00:30:49.976 8.720 - 8.776: 85.1899% ( 28) 00:30:49.976 8.776 - 8.831: 85.3498% ( 24) 00:30:49.976 8.831 - 8.887: 85.5163% ( 25) 00:30:49.976 8.887 - 8.943: 85.6496% ( 20) 00:30:49.976 8.943 - 8.999: 85.7495% ( 15) 00:30:49.976 8.999 - 9.055: 85.8761% ( 19) 00:30:49.976 9.055 - 9.111: 85.9560% ( 12) 00:30:49.976 9.111 - 9.167: 86.0959% ( 21) 00:30:49.976 9.167 - 9.223: 86.1892% ( 14) 00:30:49.976 9.223 - 9.279: 86.2825% ( 14) 00:30:49.976 9.279 - 9.334: 86.3824% ( 15) 00:30:49.976 9.334 - 9.390: 86.7755% ( 59) 00:30:49.976 9.390 - 9.446: 87.7615% ( 148) 00:30:49.976 9.446 - 9.502: 89.0340% ( 191) 00:30:49.976 9.502 - 9.558: 90.1599% ( 169) 00:30:49.976 9.558 - 9.614: 91.0393% ( 132) 00:30:49.976 9.614 - 9.670: 91.6056% ( 85) 00:30:49.976 9.670 - 9.726: 92.1252% ( 78) 00:30:49.976 9.726 - 9.782: 92.4584% ( 50) 00:30:49.976 9.782 - 9.838: 92.7249% ( 40) 00:30:49.976 9.838 - 9.893: 93.0313% ( 46) 00:30:49.976 9.893 - 9.949: 93.1912% ( 24) 00:30:49.976 9.949 - 10.005: 93.4444% ( 38) 00:30:49.976 10.005 - 10.061: 93.6109% ( 25) 00:30:49.976 10.061 - 10.117: 93.8041% ( 29) 00:30:49.976 10.117 - 10.173: 93.9507% ( 22) 00:30:49.976 10.173 - 10.229: 94.0706% ( 18) 00:30:49.976 10.229 - 10.285: 94.2172% ( 22) 00:30:49.976 10.285 - 10.341: 94.3638% ( 22) 00:30:49.976 10.341 - 10.397: 94.5037% ( 21) 00:30:49.976 10.397 - 10.452: 94.6769% ( 26) 00:30:49.976 10.452 - 10.508: 94.8301% ( 23) 00:30:49.976 10.508 - 10.564: 94.9567% ( 19) 00:30:49.976 10.564 - 10.620: 95.0366% ( 12) 00:30:49.976 10.620 - 10.676: 95.1699% ( 20) 00:30:49.976 10.676 - 10.732: 95.2831% ( 17) 00:30:49.976 10.732 - 10.788: 95.4097% ( 19) 00:30:49.976 10.788 - 10.844: 95.5097% ( 15) 00:30:49.976 10.844 - 10.900: 95.6296% ( 18) 00:30:49.976 10.900 - 10.955: 95.6762% ( 7) 00:30:49.976 10.955 - 11.011: 95.7628% ( 13) 00:30:49.976 11.011 - 11.067: 95.8294% ( 10) 00:30:49.976 11.067 - 11.123: 95.9161% ( 13) 00:30:49.976 11.123 - 11.179: 95.9760% ( 9) 00:30:49.976 11.179 - 11.235: 96.0626% ( 13) 00:30:49.976 11.235 - 11.291: 96.1159% ( 8) 00:30:49.976 11.291 - 11.347: 96.1492% ( 5) 00:30:49.976 11.347 - 11.403: 96.2225% ( 11) 00:30:49.976 11.403 - 11.459: 96.2891% ( 10) 00:30:49.976 11.459 - 11.514: 96.3558% ( 10) 00:30:49.976 11.514 - 11.570: 96.4157% ( 9) 00:30:49.976 11.570 - 11.626: 96.4557% ( 6) 00:30:49.976 11.626 - 11.682: 96.4823% ( 4) 00:30:49.976 11.682 - 11.738: 96.4890% ( 1) 00:30:49.976 11.738 - 11.794: 96.5157% ( 4) 00:30:49.976 11.794 - 11.850: 96.5490% ( 5) 00:30:49.976 11.850 - 11.906: 96.5556% ( 1) 00:30:49.976 11.906 - 11.962: 96.5956% ( 6) 00:30:49.976 11.962 - 12.017: 96.6422% ( 7) 00:30:49.976 12.017 - 12.073: 96.6822% ( 6) 00:30:49.976 12.129 - 12.185: 96.6955% ( 2) 00:30:49.976 12.185 - 12.241: 96.7155% ( 3) 00:30:49.976 12.241 - 12.297: 96.7488% ( 5) 00:30:49.977 12.297 - 12.353: 96.7622% ( 2) 00:30:49.977 12.353 - 12.409: 96.8021% ( 6) 00:30:49.977 12.409 - 12.465: 96.8155% ( 2) 00:30:49.977 12.465 - 12.521: 96.8221% ( 1) 00:30:49.977 12.521 - 12.576: 96.8288% ( 1) 00:30:49.977 12.576 - 12.632: 96.8488% ( 3) 00:30:49.977 12.688 - 12.744: 96.8554% ( 1) 00:30:49.977 12.744 - 12.800: 96.8887% ( 5) 00:30:49.977 12.800 - 12.856: 96.8954% ( 1) 00:30:49.977 12.856 - 12.912: 96.9354% ( 6) 00:30:49.977 12.912 - 12.968: 96.9487% ( 2) 00:30:49.977 13.024 - 13.079: 96.9753% ( 4) 00:30:49.977 13.079 - 13.135: 97.0020% ( 4) 00:30:49.977 13.135 - 13.191: 97.0220% ( 3) 00:30:49.977 13.191 - 13.247: 97.0286% ( 1) 00:30:49.977 13.247 - 13.303: 
97.0353% ( 1) 00:30:49.977 13.303 - 13.359: 97.0486% ( 2) 00:30:49.977 13.359 - 13.415: 97.0686% ( 3) 00:30:49.977 13.415 - 13.471: 97.0953% ( 4) 00:30:49.977 13.471 - 13.527: 97.1153% ( 3) 00:30:49.977 13.527 - 13.583: 97.1219% ( 1) 00:30:49.977 13.583 - 13.638: 97.1486% ( 4) 00:30:49.977 13.638 - 13.694: 97.1686% ( 3) 00:30:49.977 13.694 - 13.750: 97.1952% ( 4) 00:30:49.977 13.750 - 13.806: 97.2219% ( 4) 00:30:49.977 13.806 - 13.862: 97.2552% ( 5) 00:30:49.977 13.862 - 13.918: 97.2885% ( 5) 00:30:49.977 13.918 - 13.974: 97.2951% ( 1) 00:30:49.977 13.974 - 14.030: 97.3284% ( 5) 00:30:49.977 14.030 - 14.086: 97.3618% ( 5) 00:30:49.977 14.086 - 14.141: 97.3751% ( 2) 00:30:49.977 14.141 - 14.197: 97.4017% ( 4) 00:30:49.977 14.197 - 14.253: 97.4350% ( 5) 00:30:49.977 14.253 - 14.309: 97.4750% ( 6) 00:30:49.977 14.309 - 14.421: 97.5017% ( 4) 00:30:49.977 14.421 - 14.533: 97.5283% ( 4) 00:30:49.977 14.533 - 14.645: 97.5949% ( 10) 00:30:49.977 14.645 - 14.756: 97.6282% ( 5) 00:30:49.977 14.756 - 14.868: 97.6882% ( 9) 00:30:49.977 14.868 - 14.980: 97.7015% ( 2) 00:30:49.977 14.980 - 15.092: 97.7282% ( 4) 00:30:49.977 15.092 - 15.203: 97.7748% ( 7) 00:30:49.977 15.203 - 15.315: 97.8215% ( 7) 00:30:49.977 15.315 - 15.427: 97.8414% ( 3) 00:30:49.977 15.427 - 15.539: 97.8748% ( 5) 00:30:49.977 15.539 - 15.651: 97.9480% ( 11) 00:30:49.977 15.651 - 15.762: 97.9747% ( 4) 00:30:49.977 15.762 - 15.874: 98.0147% ( 6) 00:30:49.977 15.874 - 15.986: 98.0346% ( 3) 00:30:49.977 15.986 - 16.098: 98.0413% ( 1) 00:30:49.977 16.098 - 16.210: 98.0813% ( 6) 00:30:49.977 16.210 - 16.321: 98.1146% ( 5) 00:30:49.977 16.321 - 16.433: 98.1279% ( 2) 00:30:49.977 16.433 - 16.545: 98.1479% ( 3) 00:30:49.977 16.545 - 16.657: 98.1812% ( 5) 00:30:49.977 16.657 - 16.769: 98.2145% ( 5) 00:30:49.977 16.769 - 16.880: 98.2811% ( 10) 00:30:49.977 16.880 - 16.992: 98.3678% ( 13) 00:30:49.977 16.992 - 17.104: 98.4344% ( 10) 00:30:49.977 17.104 - 17.216: 98.5143% ( 12) 00:30:49.977 17.216 - 17.328: 98.5543% ( 6) 00:30:49.977 17.328 - 17.439: 98.6342% ( 12) 00:30:49.977 17.439 - 17.551: 98.6809% ( 7) 00:30:49.977 17.551 - 17.663: 98.8075% ( 19) 00:30:49.977 17.663 - 17.775: 98.8874% ( 12) 00:30:49.977 17.775 - 17.886: 98.9807% ( 14) 00:30:49.977 17.886 - 17.998: 99.0673% ( 13) 00:30:49.977 17.998 - 18.110: 99.1139% ( 7) 00:30:49.977 18.110 - 18.222: 99.1672% ( 8) 00:30:49.977 18.222 - 18.334: 99.2338% ( 10) 00:30:49.977 18.334 - 18.445: 99.2672% ( 5) 00:30:49.977 18.445 - 18.557: 99.3738% ( 16) 00:30:49.977 18.557 - 18.669: 99.4270% ( 8) 00:30:49.977 18.669 - 18.781: 99.4404% ( 2) 00:30:49.977 18.781 - 18.893: 99.5003% ( 9) 00:30:49.977 18.893 - 19.004: 99.5336% ( 5) 00:30:49.977 19.004 - 19.116: 99.5803% ( 7) 00:30:49.977 19.116 - 19.228: 99.6136% ( 5) 00:30:49.977 19.228 - 19.340: 99.6269% ( 2) 00:30:49.977 19.340 - 19.452: 99.6336% ( 1) 00:30:49.977 19.452 - 19.563: 99.6602% ( 4) 00:30:49.977 19.563 - 19.675: 99.6736% ( 2) 00:30:49.977 19.675 - 19.787: 99.6869% ( 2) 00:30:49.977 19.787 - 19.899: 99.7069% ( 3) 00:30:49.977 20.010 - 20.122: 99.7135% ( 1) 00:30:49.977 20.122 - 20.234: 99.7335% ( 3) 00:30:49.977 20.234 - 20.346: 99.7402% ( 1) 00:30:49.977 20.346 - 20.458: 99.7535% ( 2) 00:30:49.977 20.458 - 20.569: 99.7668% ( 2) 00:30:49.977 20.569 - 20.681: 99.7735% ( 1) 00:30:49.977 20.681 - 20.793: 99.7868% ( 2) 00:30:49.977 21.687 - 21.799: 99.8001% ( 2) 00:30:49.977 21.799 - 21.911: 99.8068% ( 1) 00:30:49.977 22.246 - 22.358: 99.8135% ( 1) 00:30:49.977 22.358 - 22.470: 99.8201% ( 1) 00:30:49.977 22.582 - 22.693: 99.8268% ( 1) 
00:30:49.977 23.141 - 23.252: 99.8334% ( 1) 00:30:49.977 23.252 - 23.364: 99.8401% ( 1) 00:30:49.977 23.364 - 23.476: 99.8468% ( 1) 00:30:49.977 23.588 - 23.700: 99.8601% ( 2) 00:30:49.977 23.700 - 23.811: 99.8801% ( 3) 00:30:49.977 23.923 - 24.035: 99.8867% ( 1) 00:30:49.977 24.259 - 24.370: 99.8934% ( 1) 00:30:49.977 24.370 - 24.482: 99.9001% ( 1) 00:30:49.977 24.482 - 24.594: 99.9067% ( 1) 00:30:49.977 24.706 - 24.817: 99.9134% ( 1) 00:30:49.977 24.929 - 25.041: 99.9201% ( 1) 00:30:49.977 25.153 - 25.265: 99.9267% ( 1) 00:30:49.977 25.935 - 26.047: 99.9334% ( 1) 00:30:49.977 26.047 - 26.159: 99.9400% ( 1) 00:30:49.977 26.718 - 26.830: 99.9467% ( 1) 00:30:49.977 27.389 - 27.500: 99.9534% ( 1) 00:30:49.977 30.631 - 30.854: 99.9600% ( 1) 00:30:49.977 33.090 - 33.314: 99.9667% ( 1) 00:30:49.977 38.009 - 38.232: 99.9734% ( 1) 00:30:49.977 43.822 - 44.045: 99.9800% ( 1) 00:30:49.977 47.176 - 47.399: 99.9867% ( 1) 00:30:49.977 52.989 - 53.212: 99.9933% ( 1) 00:30:49.977 1438.072 - 1445.226: 100.0000% ( 1) 00:30:49.977 00:30:49.977 00:30:49.977 real 0m1.335s 00:30:49.977 user 0m1.107s 00:30:49.977 sys 0m0.175s 00:30:49.977 17:27:27 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.977 17:27:27 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:30:49.977 ************************************ 00:30:49.977 END TEST nvme_overhead 00:30:49.977 ************************************ 00:30:49.977 17:27:27 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:49.977 17:27:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:49.977 17:27:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.977 17:27:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:49.977 ************************************ 00:30:49.977 START TEST nvme_arbitration 00:30:49.977 ************************************ 00:30:49.977 17:27:27 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:54.176 Initializing NVMe Controllers 00:30:54.176 Attached to 0000:00:10.0 00:30:54.176 Attached to 0000:00:11.0 00:30:54.176 Attached to 0000:00:13.0 00:30:54.176 Attached to 0000:00:12.0 00:30:54.176 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:54.176 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:30:54.176 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:30:54.176 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:30:54.176 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:30:54.176 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:30:54.176 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:54.176 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:54.176 Initialization complete. Launching workers. 
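Editor's note: the arbitration test pins worker threads to cores and gives each an I/O qpair with a submission-queue priority class; under weighted round robin arbitration the controller weights urgent queues ahead of high, medium, and low ones. A hedged sketch of allocating one such qpair, assuming SPDK's qpair-opts API (error handling omitted):

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        /* The priority only takes effect when the controller is configured
         * for weighted round robin arbitration. */
        opts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }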
00:30:54.176 Starting thread on core 1 with urgent priority queue 00:30:54.176 Starting thread on core 2 with urgent priority queue 00:30:54.176 Starting thread on core 0 with urgent priority queue 00:30:54.176 Starting thread on core 3 with urgent priority queue 00:30:54.176 QEMU NVMe Ctrl (12340 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:30:54.176 QEMU NVMe Ctrl (12342 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:30:54.176 QEMU NVMe Ctrl (12341 ) core 1: 448.00 IO/s 223.21 secs/100000 ios 00:30:54.176 QEMU NVMe Ctrl (12342 ) core 1: 448.00 IO/s 223.21 secs/100000 ios 00:30:54.176 QEMU NVMe Ctrl (12343 ) core 2: 426.67 IO/s 234.38 secs/100000 ios 00:30:54.176 QEMU NVMe Ctrl (12342 ) core 3: 448.00 IO/s 223.21 secs/100000 ios 00:30:54.176 ======================================================== 00:30:54.176 00:30:54.176 00:30:54.176 real 0m3.472s 00:30:54.176 user 0m9.459s 00:30:54.176 sys 0m0.168s 00:30:54.176 17:27:30 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.176 17:27:30 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:30:54.176 ************************************ 00:30:54.176 END TEST nvme_arbitration 00:30:54.176 ************************************ 00:30:54.176 17:27:30 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:54.176 17:27:30 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:54.176 17:27:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.176 17:27:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:54.176 ************************************ 00:30:54.176 START TEST nvme_single_aen 00:30:54.176 ************************************ 00:30:54.176 17:27:30 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:54.176 Asynchronous Event Request test 00:30:54.176 Attached to 0000:00:10.0 00:30:54.176 Attached to 0000:00:11.0 00:30:54.176 Attached to 0000:00:13.0 00:30:54.176 Attached to 0000:00:12.0 00:30:54.176 Reset controller to setup AER completions for this process 00:30:54.176 Registering asynchronous event callbacks... 
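Editor's note: the AER flow being logged here maps onto two public calls: registering a per-controller callback for asynchronous events, and lowering the composite temperature threshold (feature 04h) below the current temperature so the device fires one. A hedged sketch, assuming SPDK's admin API; arm_temperature_aer, the no-op completion callback, and the 200 Kelvin value are illustrative:

    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* Runs when an outstanding AER completes, e.g. the temperature
         * crossing we just provoked; cpl->cdw0 encodes the event type. */
    }

    static void
    set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
    }

    static void
    arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        /* Feature 04h: cdw11 bits 15:0 hold the threshold in Kelvin; 200 is
         * deliberately below the ~323 K the devices report, so an AER fires. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                        SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        200, 0, NULL, 0, set_feat_done, NULL);
    }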
00:30:54.176 Getting orig temperature thresholds of all controllers 00:30:54.176 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:54.176 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:54.176 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:54.176 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:54.176 Setting all controllers temperature threshold low to trigger AER 00:30:54.176 Waiting for all controllers temperature threshold to be set lower 00:30:54.176 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:54.176 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:30:54.176 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:54.176 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:30:54.176 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:54.176 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:30:54.176 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:54.176 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:30:54.176 Waiting for all controllers to trigger AER and reset threshold 00:30:54.176 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:54.176 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:54.176 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:54.176 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:54.176 Cleaning up... 00:30:54.176 00:30:54.176 real 0m0.339s 00:30:54.176 user 0m0.117s 00:30:54.176 sys 0m0.166s 00:30:54.176 17:27:31 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.176 17:27:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:30:54.176 ************************************ 00:30:54.176 END TEST nvme_single_aen 00:30:54.176 ************************************ 00:30:54.176 17:27:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:54.176 17:27:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:54.176 17:27:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.176 17:27:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:54.176 ************************************ 00:30:54.176 START TEST nvme_doorbell_aers 00:30:54.176 ************************************ 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:54.176 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
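Editor's note on the per-device runs below: doorbell_aers maps the controller's BAR0 registers and writes an out-of-range tail value to a submission-queue doorbell, and the pass criterion is that the controller reports this through an AER instead of wedging. A hedged sketch of the invalid write, assuming CAP.DSTRD == 0 (4-byte doorbell stride, as QEMU reports) and SPDK's register accessor:

    #include "spdk/nvme.h"

    static void
    poke_invalid_sq_doorbell(struct spdk_nvme_ctrlr *ctrlr, uint16_t qid,
                             uint32_t queue_size)
    {
        volatile struct spdk_nvme_registers *regs =
            spdk_nvme_ctrlr_get_registers(ctrlr);

        /* With DSTRD == 0 each queue pair owns an 8-byte doorbell slot, so
         * plain array indexing lands on SQ qid's tail doorbell. A tail index
         * past the queue size is invalid and must raise an async event. */
        regs->doorbell[qid].sq_tdbl = queue_size + 1;
    }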
00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:54.177 17:27:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:30:54.434 [2024-11-26 17:27:31.684816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:04.398 Executing: test_write_invalid_db 00:31:04.398 Waiting for AER completion... 00:31:04.398 Failure: test_write_invalid_db 00:31:04.398 00:31:04.398 Executing: test_invalid_db_write_overflow_sq 00:31:04.398 Waiting for AER completion... 00:31:04.398 Failure: test_invalid_db_write_overflow_sq 00:31:04.398 00:31:04.398 Executing: test_invalid_db_write_overflow_cq 00:31:04.398 Waiting for AER completion... 00:31:04.398 Failure: test_invalid_db_write_overflow_cq 00:31:04.398 00:31:04.398 17:27:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:04.398 17:27:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:04.398 [2024-11-26 17:27:41.678235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:14.456 Executing: test_write_invalid_db 00:31:14.456 Waiting for AER completion... 00:31:14.456 Failure: test_write_invalid_db 00:31:14.456 00:31:14.456 Executing: test_invalid_db_write_overflow_sq 00:31:14.456 Waiting for AER completion... 00:31:14.456 Failure: test_invalid_db_write_overflow_sq 00:31:14.456 00:31:14.456 Executing: test_invalid_db_write_overflow_cq 00:31:14.456 Waiting for AER completion... 00:31:14.456 Failure: test_invalid_db_write_overflow_cq 00:31:14.456 00:31:14.456 17:27:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:14.456 17:27:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:14.456 [2024-11-26 17:27:51.712595] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:24.430 Executing: test_write_invalid_db 00:31:24.430 Waiting for AER completion... 00:31:24.430 Failure: test_write_invalid_db 00:31:24.430 00:31:24.430 Executing: test_invalid_db_write_overflow_sq 00:31:24.430 Waiting for AER completion... 00:31:24.430 Failure: test_invalid_db_write_overflow_sq 00:31:24.430 00:31:24.430 Executing: test_invalid_db_write_overflow_cq 00:31:24.430 Waiting for AER completion... 
00:31:24.430 Failure: test_invalid_db_write_overflow_cq 00:31:24.430 00:31:24.430 17:28:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:24.430 17:28:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:24.430 [2024-11-26 17:28:01.841008] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.403 Executing: test_write_invalid_db 00:31:34.403 Waiting for AER completion... 00:31:34.403 Failure: test_write_invalid_db 00:31:34.403 00:31:34.403 Executing: test_invalid_db_write_overflow_sq 00:31:34.403 Waiting for AER completion... 00:31:34.403 Failure: test_invalid_db_write_overflow_sq 00:31:34.403 00:31:34.403 Executing: test_invalid_db_write_overflow_cq 00:31:34.403 Waiting for AER completion... 00:31:34.403 Failure: test_invalid_db_write_overflow_cq 00:31:34.403 00:31:34.403 00:31:34.403 real 0m40.288s 00:31:34.403 user 0m33.350s 00:31:34.403 sys 0m6.545s 00:31:34.403 17:28:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.403 17:28:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:31:34.403 ************************************ 00:31:34.403 END TEST nvme_doorbell_aers 00:31:34.403 ************************************ 00:31:34.403 17:28:11 nvme -- nvme/nvme.sh@97 -- # uname 00:31:34.403 17:28:11 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:34.403 17:28:11 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:34.403 17:28:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:34.403 17:28:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.403 17:28:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:34.403 ************************************ 00:31:34.403 START TEST nvme_multi_aen 00:31:34.403 ************************************ 00:31:34.403 17:28:11 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:34.661 [2024-11-26 17:28:11.926376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.926518] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.926535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.928020] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.928074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.928088] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.929178] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. 
Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.929222] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.929235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.930270] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.930308] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 [2024-11-26 17:28:11.930321] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64981) is not found. Dropping the request. 00:31:34.662 Child process pid: 65501 00:31:34.921 [Child] Asynchronous Event Request test 00:31:34.921 [Child] Attached to 0000:00:10.0 00:31:34.921 [Child] Attached to 0000:00:11.0 00:31:34.921 [Child] Attached to 0000:00:13.0 00:31:34.921 [Child] Attached to 0000:00:12.0 00:31:34.921 [Child] Registering asynchronous event callbacks... 00:31:34.921 [Child] Getting orig temperature thresholds of all controllers 00:31:34.921 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.921 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.921 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.921 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.921 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:34.921 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.921 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.921 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.921 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.921 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.921 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.921 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.922 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.922 [Child] Cleaning up... 00:31:34.922 Asynchronous Event Request test 00:31:34.922 Attached to 0000:00:10.0 00:31:34.922 Attached to 0000:00:11.0 00:31:34.922 Attached to 0000:00:13.0 00:31:34.922 Attached to 0000:00:12.0 00:31:34.922 Reset controller to setup AER completions for this process 00:31:34.922 Registering asynchronous event callbacks... 
00:31:34.922 Getting orig temperature thresholds of all controllers 00:31:34.922 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.922 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.922 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.922 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:34.922 Setting all controllers temperature threshold low to trigger AER 00:31:34.922 Waiting for all controllers temperature threshold to be set lower 00:31:34.922 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.922 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:31:34.922 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.922 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:31:34.922 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.922 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:31:34.922 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:34.922 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:31:34.922 Waiting for all controllers to trigger AER and reset threshold 00:31:34.922 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.922 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.922 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.922 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:34.922 Cleaning up... 00:31:34.922 00:31:34.922 real 0m0.672s 00:31:34.922 user 0m0.216s 00:31:34.922 sys 0m0.344s 00:31:34.922 17:28:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.922 17:28:12 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:31:34.922 ************************************ 00:31:34.922 END TEST nvme_multi_aen 00:31:34.922 ************************************ 00:31:34.922 17:28:12 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:34.922 17:28:12 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:34.922 17:28:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.922 17:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:34.922 ************************************ 00:31:34.922 START TEST nvme_startup 00:31:34.922 ************************************ 00:31:34.922 17:28:12 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:35.490 Initializing NVMe Controllers 00:31:35.490 Attached to 0000:00:10.0 00:31:35.490 Attached to 0000:00:11.0 00:31:35.490 Attached to 0000:00:13.0 00:31:35.490 Attached to 0000:00:12.0 00:31:35.490 Initialization complete. 00:31:35.490 Time used:262217.938 (us). 
00:31:35.490 00:31:35.490 real 0m0.377s 00:31:35.490 user 0m0.178s 00:31:35.490 sys 0m0.154s 00:31:35.490 17:28:12 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.490 17:28:12 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:31:35.490 ************************************ 00:31:35.490 END TEST nvme_startup 00:31:35.490 ************************************ 00:31:35.490 17:28:12 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:35.490 17:28:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:35.490 17:28:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.490 17:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:35.490 ************************************ 00:31:35.490 START TEST nvme_multi_secondary 00:31:35.490 ************************************ 00:31:35.490 17:28:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:31:35.490 17:28:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65558 00:31:35.490 17:28:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:35.490 17:28:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65559 00:31:35.490 17:28:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:35.490 17:28:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:38.770 Initializing NVMe Controllers 00:31:38.770 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:38.770 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:38.770 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:38.770 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:38.770 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:38.770 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:38.770 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:38.770 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:38.770 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:38.770 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:38.770 Initialization complete. Launching workers. 
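All three spdk_nvme_perf instances in the nvme_multi_secondary setup above pass the same shared-memory id (-i 0), which places them in one DPDK process group: the first to initialize acts as the primary and owns the hugepage mappings, and the other two join as secondaries on disjoint core masks (0x1, 0x2, 0x4). A standalone sketch of the pattern; the sleep is an assumption standing in for however nvme.sh actually serializes startup:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, 5 s: outlives the others
    sleep 2                                            # assumed: give the primary time to init
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1, 3 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2, 3 s
    wait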
00:31:38.770 ======================================================== 00:31:38.770 Latency(us) 00:31:38.770 Device Information : IOPS MiB/s Average min max 00:31:38.770 PCIE (0000:00:10.0) NSID 1 from core 1: 5119.49 20.00 3122.55 870.79 21771.71 00:31:38.770 PCIE (0000:00:11.0) NSID 1 from core 1: 5119.49 20.00 3124.68 873.17 21691.46 00:31:38.770 PCIE (0000:00:13.0) NSID 1 from core 1: 5119.49 20.00 3124.60 909.13 21321.95 00:31:38.770 PCIE (0000:00:12.0) NSID 1 from core 1: 5119.49 20.00 3124.58 883.25 21192.29 00:31:38.770 PCIE (0000:00:12.0) NSID 2 from core 1: 5119.49 20.00 3124.55 886.18 21819.58 00:31:38.770 PCIE (0000:00:12.0) NSID 3 from core 1: 5124.82 20.02 3121.40 910.76 21585.61 00:31:38.770 ======================================================== 00:31:38.770 Total : 30722.28 120.01 3123.73 870.79 21819.58 00:31:38.770 00:31:39.028 Initializing NVMe Controllers 00:31:39.028 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:39.028 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:39.028 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:39.028 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:39.028 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:39.028 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:39.028 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:39.028 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:39.028 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:39.028 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:39.028 Initialization complete. Launching workers. 00:31:39.028 ======================================================== 00:31:39.028 Latency(us) 00:31:39.028 Device Information : IOPS MiB/s Average min max 00:31:39.028 PCIE (0000:00:10.0) NSID 1 from core 2: 2712.41 10.60 5895.60 1179.43 39753.98 00:31:39.028 PCIE (0000:00:11.0) NSID 1 from core 2: 2712.41 10.60 5897.69 1094.29 40288.12 00:31:39.028 PCIE (0000:00:13.0) NSID 1 from core 2: 2712.41 10.60 5897.71 1064.56 40193.19 00:31:39.028 PCIE (0000:00:12.0) NSID 1 from core 2: 2712.41 10.60 5897.36 1202.34 40608.41 00:31:39.028 PCIE (0000:00:12.0) NSID 2 from core 2: 2712.41 10.60 5897.85 1069.00 40625.93 00:31:39.028 PCIE (0000:00:12.0) NSID 3 from core 2: 2712.41 10.60 5897.68 1178.55 40738.86 00:31:39.028 ======================================================== 00:31:39.028 Total : 16274.46 63.57 5897.32 1064.56 40738.86 00:31:39.028 00:31:39.286 17:28:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65558 00:31:41.261 Initializing NVMe Controllers 00:31:41.261 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:41.261 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:41.261 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:41.261 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:41.261 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:41.261 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:41.261 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:41.261 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:41.261 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:41.261 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:41.261 Initialization complete. Launching workers. 
00:31:41.261 ======================================================== 00:31:41.261 Latency(us) 00:31:41.261 Device Information : IOPS MiB/s Average min max 00:31:41.261 PCIE (0000:00:10.0) NSID 1 from core 0: 6933.48 27.08 2305.23 852.04 21037.68 00:31:41.261 PCIE (0000:00:11.0) NSID 1 from core 0: 6933.48 27.08 2306.95 872.62 20980.34 00:31:41.261 PCIE (0000:00:13.0) NSID 1 from core 0: 6933.48 27.08 2306.91 849.68 20819.29 00:31:41.261 PCIE (0000:00:12.0) NSID 1 from core 0: 6933.48 27.08 2306.85 856.39 20491.83 00:31:41.261 PCIE (0000:00:12.0) NSID 2 from core 0: 6933.48 27.08 2306.82 846.61 20334.77 00:31:41.261 PCIE (0000:00:12.0) NSID 3 from core 0: 6933.48 27.08 2306.78 839.32 20964.20 00:31:41.261 ======================================================== 00:31:41.261 Total : 41600.89 162.50 2306.59 839.32 21037.68 00:31:41.261 00:31:41.261 17:28:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65559 00:31:41.261 17:28:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65628 00:31:41.261 17:28:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:41.261 17:28:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65629 00:31:41.261 17:28:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:41.261 17:28:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:44.550 Initializing NVMe Controllers 00:31:44.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:44.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:44.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:44.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:44.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:44.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:44.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:44.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:44.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:44.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:44.550 Initialization complete. Launching workers. 
00:31:44.550 ======================================================== 00:31:44.550 Latency(us) 00:31:44.550 Device Information : IOPS MiB/s Average min max 00:31:44.550 PCIE (0000:00:10.0) NSID 1 from core 0: 4786.90 18.70 3339.45 885.71 7871.22 00:31:44.550 PCIE (0000:00:11.0) NSID 1 from core 0: 4786.90 18.70 3341.99 908.19 8112.91 00:31:44.550 PCIE (0000:00:13.0) NSID 1 from core 0: 4786.90 18.70 3342.17 930.21 8276.95 00:31:44.550 PCIE (0000:00:12.0) NSID 1 from core 0: 4786.90 18.70 3342.27 902.36 8797.78 00:31:44.550 PCIE (0000:00:12.0) NSID 2 from core 0: 4786.90 18.70 3342.27 943.34 8306.84 00:31:44.550 PCIE (0000:00:12.0) NSID 3 from core 0: 4786.90 18.70 3342.29 935.97 8025.41 00:31:44.550 ======================================================== 00:31:44.550 Total : 28721.41 112.19 3341.74 885.71 8797.78 00:31:44.550 00:31:44.550 Initializing NVMe Controllers 00:31:44.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:44.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:44.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:44.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:44.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:44.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:44.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:44.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:44.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:44.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:44.550 Initialization complete. Launching workers. 00:31:44.550 ======================================================== 00:31:44.550 Latency(us) 00:31:44.550 Device Information : IOPS MiB/s Average min max 00:31:44.550 PCIE (0000:00:10.0) NSID 1 from core 1: 5071.28 19.81 3152.25 1198.89 10398.35 00:31:44.550 PCIE (0000:00:11.0) NSID 1 from core 1: 5071.28 19.81 3154.23 1261.20 10173.89 00:31:44.550 PCIE (0000:00:13.0) NSID 1 from core 1: 5071.28 19.81 3154.11 1135.93 9640.56 00:31:44.550 PCIE (0000:00:12.0) NSID 1 from core 1: 5071.28 19.81 3153.99 1176.87 9524.20 00:31:44.550 PCIE (0000:00:12.0) NSID 2 from core 1: 5071.28 19.81 3153.92 1257.75 8867.27 00:31:44.550 PCIE (0000:00:12.0) NSID 3 from core 1: 5071.28 19.81 3153.83 1214.77 9940.88 00:31:44.550 ======================================================== 00:31:44.550 Total : 30427.66 118.86 3153.72 1135.93 10398.35 00:31:44.550 00:31:46.454 Initializing NVMe Controllers 00:31:46.454 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:46.454 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:46.454 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:46.454 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:46.454 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:46.455 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:46.455 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:46.455 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:46.455 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:46.455 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:46.455 Initialization complete. Launching workers. 
00:31:46.455 ======================================================== 00:31:46.455 Latency(us) 00:31:46.455 Device Information : IOPS MiB/s Average min max 00:31:46.455 PCIE (0000:00:10.0) NSID 1 from core 2: 2933.22 11.46 5451.50 976.68 16615.97 00:31:46.455 PCIE (0000:00:11.0) NSID 1 from core 2: 2933.22 11.46 5453.83 972.08 19746.02 00:31:46.455 PCIE (0000:00:13.0) NSID 1 from core 2: 2933.22 11.46 5453.97 983.91 20947.38 00:31:46.455 PCIE (0000:00:12.0) NSID 1 from core 2: 2933.22 11.46 5453.26 966.43 19872.16 00:31:46.455 PCIE (0000:00:12.0) NSID 2 from core 2: 2933.22 11.46 5452.57 939.47 20929.50 00:31:46.455 PCIE (0000:00:12.0) NSID 3 from core 2: 2933.22 11.46 5448.57 943.69 16461.86 00:31:46.455 ======================================================== 00:31:46.455 Total : 17599.35 68.75 5452.28 939.47 20947.38 00:31:46.455 00:31:46.714 17:28:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65628 00:31:46.714 17:28:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65629 00:31:46.715 00:31:46.715 real 0m11.155s 00:31:46.715 user 0m18.490s 00:31:46.715 sys 0m1.180s 00:31:46.715 17:28:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.715 17:28:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:31:46.715 ************************************ 00:31:46.715 END TEST nvme_multi_secondary 00:31:46.715 ************************************ 00:31:46.715 17:28:24 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:46.715 17:28:24 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:31:46.715 17:28:24 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64549 ]] 00:31:46.715 17:28:24 nvme -- common/autotest_common.sh@1094 -- # kill 64549 00:31:46.715 17:28:24 nvme -- common/autotest_common.sh@1095 -- # wait 64549 00:31:46.715 [2024-11-26 17:28:24.018924] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.019104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.019192] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.019236] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.025286] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.025395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.025425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.025455] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.031044] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 
00:31:46.715 [2024-11-26 17:28:24.031164] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.031198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.031234] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.036033] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.036119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.036140] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.715 [2024-11-26 17:28:24.036163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65491) is not found. Dropping the request. 00:31:46.975 17:28:24 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:31:46.975 17:28:24 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:31:46.975 17:28:24 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:46.975 17:28:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:46.975 17:28:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.975 17:28:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:46.975 ************************************ 00:31:46.975 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:46.975 ************************************ 00:31:46.975 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:46.975 * Looking for test storage... 
00:31:46.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:46.975 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:46.975 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:31:46.975 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.234 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.234 --rc genhtml_branch_coverage=1 00:31:47.235 --rc genhtml_function_coverage=1 00:31:47.235 --rc genhtml_legend=1 00:31:47.235 --rc geninfo_all_blocks=1 00:31:47.235 --rc geninfo_unexecuted_blocks=1 00:31:47.235 00:31:47.235 ' 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.235 --rc genhtml_branch_coverage=1 00:31:47.235 --rc genhtml_function_coverage=1 00:31:47.235 --rc genhtml_legend=1 00:31:47.235 --rc geninfo_all_blocks=1 00:31:47.235 --rc geninfo_unexecuted_blocks=1 00:31:47.235 00:31:47.235 ' 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.235 --rc genhtml_branch_coverage=1 00:31:47.235 --rc genhtml_function_coverage=1 00:31:47.235 --rc genhtml_legend=1 00:31:47.235 --rc geninfo_all_blocks=1 00:31:47.235 --rc geninfo_unexecuted_blocks=1 00:31:47.235 00:31:47.235 ' 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.235 --rc genhtml_branch_coverage=1 00:31:47.235 --rc genhtml_function_coverage=1 00:31:47.235 --rc genhtml_legend=1 00:31:47.235 --rc geninfo_all_blocks=1 00:31:47.235 --rc geninfo_unexecuted_blocks=1 00:31:47.235 00:31:47.235 ' 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:47.235 
17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65792 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65792 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65792 ']' 00:31:47.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.235 17:28:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:47.494 [2024-11-26 17:28:24.733576] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:31:47.494 [2024-11-26 17:28:24.733740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65792 ] 00:31:47.494 [2024-11-26 17:28:24.936591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:47.754 [2024-11-26 17:28:25.080538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.754 [2024-11-26 17:28:25.080718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:47.754 [2024-11-26 17:28:25.080854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.754 [2024-11-26 17:28:25.080901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:48.775 nvme0n1 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_By9u6.txt 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:48.775 true 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732642106 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65826 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:48.775 17:28:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:51.357 [2024-11-26 17:28:28.229569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:31:51.357 [2024-11-26 17:28:28.232166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.357 [2024-11-26 17:28:28.232227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:51.357 [2024-11-26 17:28:28.232245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.357 [2024-11-26 17:28:28.234687] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65826 00:31:51.357 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65826 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65826 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_By9u6.txt 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_By9u6.txt 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65792 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65792 ']' 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65792 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65792 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.357 killing process with pid 65792 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65792' 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65792 00:31:51.357 17:28:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65792 00:31:54.702 17:28:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:54.702 17:28:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:54.702 00:31:54.702 real 0m7.305s 00:31:54.702 user 0m25.697s 00:31:54.702 sys 0m0.816s 00:31:54.702 17:28:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 
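The status assertions above operate on the raw 16-byte completion entry that bdev_nvme_send_cmd hands back base64-encoded in its .cpl field: bytes 14-15 are the little-endian status word, where bit 0 is the phase tag, bits 1-8 the Status Code, and bits 9-11 the Status Code Type, so base64_decode_bits is called with shift/mask pairs 1/255 for SC and 9/3 for SCT (a 2-bit mask, enough for the value injected here). A self-contained re-creation of that decode:

    # Decode SC/SCT from a base64-encoded 16-byte NVMe completion entry.
    decode_status() {
        local b64=$1 shift_by=$2 mask=$3 bin word
        bin=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
        word=$((bin[15] << 8 | bin[14]))   # status word, little-endian
        printf '0x%x\n' $(((word >> shift_by) & mask))
    }
    decode_status AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1 (matches the injected --sc 1)
    decode_status AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0 (matches the injected --sct 0)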
00:31:54.702 17:28:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:54.702 ************************************ 00:31:54.702 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:54.702 ************************************ 00:31:54.702 17:28:31 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:54.702 17:28:31 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:54.702 17:28:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:54.702 17:28:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.702 17:28:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:54.702 ************************************ 00:31:54.702 START TEST nvme_fio 00:31:54.702 ************************************ 00:31:54.702 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:31:54.702 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:54.702 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:54.703 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:31:54.703 17:28:31 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:54.703 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:31:54.703 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:54.703 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:54.703 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:54.703 17:28:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:54.703 17:28:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:54.703 17:28:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:54.962 17:28:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:54.962 17:28:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:54.962 17:28:32 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:54.962 17:28:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:55.222 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:55.222 fio-3.35 00:31:55.222 Starting 1 thread 00:32:00.498 00:32:00.498 test: (groupid=0, jobs=1): err= 0: pid=65985: Tue Nov 26 17:28:37 2024 00:32:00.498 read: IOPS=19.9k, BW=77.6MiB/s (81.3MB/s)(155MiB/2001msec) 00:32:00.498 slat (nsec): min=4424, max=86773, avg=5867.24, stdev=2474.32 00:32:00.498 clat (usec): min=250, max=12625, avg=3216.74, stdev=1056.01 00:32:00.498 lat (usec): min=256, max=12699, avg=3222.60, stdev=1057.46 00:32:00.498 clat percentiles (usec): 00:32:00.498 | 1.00th=[ 2089], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:32:00.498 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:32:00.498 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3785], 95.00th=[ 5407], 00:32:00.498 | 99.00th=[ 8356], 99.50th=[ 8979], 99.90th=[10290], 99.95th=[11207], 00:32:00.498 | 99.99th=[12387] 00:32:00.498 bw ( KiB/s): min=68552, max=84456, per=99.12%, avg=78730.67, stdev=8837.90, samples=3 00:32:00.498 iops : min=17138, max=21114, avg=19682.67, stdev=2209.47, samples=3 00:32:00.498 write: IOPS=19.8k, BW=77.3MiB/s (81.1MB/s)(155MiB/2001msec); 0 zone resets 00:32:00.498 slat (nsec): min=4550, max=95706, avg=6162.44, stdev=2422.43 00:32:00.498 clat (usec): min=213, max=12531, avg=3216.01, stdev=1049.18 00:32:00.498 lat (usec): min=219, max=12537, avg=3222.17, stdev=1050.59 00:32:00.498 clat percentiles (usec): 00:32:00.498 | 1.00th=[ 2057], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:32:00.498 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:32:00.498 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3752], 95.00th=[ 5342], 00:32:00.498 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[10683], 99.95th=[11469], 00:32:00.498 | 99.99th=[12256] 00:32:00.498 bw ( KiB/s): min=68664, max=84688, per=99.50%, avg=78805.33, stdev=8820.11, samples=3 00:32:00.498 iops : min=17166, max=21172, avg=19701.33, stdev=2205.03, samples=3 00:32:00.498 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:00.498 lat (msec) : 2=0.84%, 4=90.35%, 10=8.61%, 20=0.15% 00:32:00.498 cpu : usr=98.95%, sys=0.15%, 
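fio_nvme, traced above, runs stock fio against SPDK's external ioengine: the harness preloads the matching sanitizer runtime ahead of the plugin so ASan's hooks load first, and the PCI address in --filename is written with dots (0000.00.10.0) because fio splits filenames on colons. The same job by hand, paths verbatim from this log (needs root and the device bound to a userspace driver):

    SPDK=/home/vagrant/spdk_repo/spdk
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_nvme" \
      /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096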
ctx=4, majf=0, minf=606 00:32:00.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:00.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:00.498 issued rwts: total=39733,39620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:00.498 00:32:00.498 Run status group 0 (all jobs): 00:32:00.498 READ: bw=77.6MiB/s (81.3MB/s), 77.6MiB/s-77.6MiB/s (81.3MB/s-81.3MB/s), io=155MiB (163MB), run=2001-2001msec 00:32:00.498 WRITE: bw=77.3MiB/s (81.1MB/s), 77.3MiB/s-77.3MiB/s (81.1MB/s-81.1MB/s), io=155MiB (162MB), run=2001-2001msec 00:32:00.757 ----------------------------------------------------- 00:32:00.757 Suppressions used: 00:32:00.757 count bytes template 00:32:00.757 1 32 /usr/src/fio/parse.c 00:32:00.757 1 8 libtcmalloc_minimal.so 00:32:00.757 ----------------------------------------------------- 00:32:00.757 00:32:00.757 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:00.757 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:00.757 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:00.757 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:32:01.016 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:32:01.016 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:01.276 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:01.276 17:28:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:01.276 17:28:38 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:01.276 17:28:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:32:01.535 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:01.535 fio-3.35 00:32:01.535 Starting 1 thread 00:32:08.107 00:32:08.107 test: (groupid=0, jobs=1): err= 0: pid=66052: Tue Nov 26 17:28:44 2024 00:32:08.107 read: IOPS=20.9k, BW=81.6MiB/s (85.5MB/s)(163MiB/2001msec) 00:32:08.107 slat (nsec): min=3938, max=62722, avg=5483.47, stdev=2641.46 00:32:08.107 clat (usec): min=204, max=52731, avg=2977.09, stdev=1979.08 00:32:08.107 lat (usec): min=208, max=52736, avg=2982.57, stdev=1979.89 00:32:08.107 clat percentiles (usec): 00:32:08.107 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2606], 00:32:08.107 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:32:08.107 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 3097], 95.00th=[ 5080], 00:32:08.107 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[47449], 99.95th=[47973], 00:32:08.107 | 99.99th=[48497] 00:32:08.107 bw ( KiB/s): min=69128, max=88152, per=95.22%, avg=79519.00, stdev=9633.07, samples=3 00:32:08.107 iops : min=17282, max=22038, avg=19879.67, stdev=2408.25, samples=3 00:32:08.107 write: IOPS=20.8k, BW=81.2MiB/s (85.2MB/s)(163MiB/2001msec); 0 zone resets 00:32:08.107 slat (nsec): min=4201, max=67067, avg=6135.93, stdev=2722.13 00:32:08.107 clat (usec): min=213, max=57329, avg=3142.68, stdev=3364.49 00:32:08.107 lat (usec): min=218, max=57334, avg=3148.82, stdev=3364.98 00:32:08.107 clat percentiles (usec): 00:32:08.107 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:32:08.107 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:32:08.107 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 3195], 95.00th=[ 5145], 00:32:08.107 | 99.00th=[ 7701], 99.50th=[ 9765], 99.90th=[54789], 99.95th=[55313], 00:32:08.107 | 99.99th=[57410] 00:32:08.107 bw ( KiB/s): min=69304, max=88168, per=95.65%, avg=79540.33, stdev=9534.33, samples=3 00:32:08.107 iops : min=17326, max=22042, avg=19885.00, stdev=2383.56, samples=3 00:32:08.107 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:32:08.107 lat (msec) : 2=0.50%, 4=92.30%, 10=6.81%, 20=0.04%, 50=0.17% 00:32:08.107 lat (msec) : 100=0.14% 00:32:08.107 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=607 00:32:08.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:08.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:08.107 issued rwts: total=41775,41601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.107 00:32:08.107 Run status group 0 (all jobs): 00:32:08.107 READ: bw=81.6MiB/s (85.5MB/s), 81.6MiB/s-81.6MiB/s (85.5MB/s-85.5MB/s), io=163MiB (171MB), run=2001-2001msec 00:32:08.107 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=163MiB (170MB), run=2001-2001msec 00:32:08.107 ----------------------------------------------------- 00:32:08.107 Suppressions used: 00:32:08.107 count bytes template 00:32:08.107 1 32 /usr/src/fio/parse.c 00:32:08.107 1 8 libtcmalloc_minimal.so 00:32:08.107 
----------------------------------------------------- 00:32:08.107 00:32:08.107 17:28:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:08.107 17:28:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:08.107 17:28:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:32:08.107 17:28:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:08.107 17:28:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:32:08.107 17:28:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:08.107 17:28:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:08.107 17:28:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:08.107 17:28:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:08.367 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:08.367 fio-3.35 00:32:08.367 Starting 1 thread 00:32:14.936 00:32:14.936 test: (groupid=0, jobs=1): err= 0: pid=66128: Tue Nov 26 17:28:51 2024 00:32:14.936 read: IOPS=20.2k, BW=78.9MiB/s (82.7MB/s)(158MiB/2001msec) 00:32:14.936 slat (nsec): min=3920, max=96478, avg=6103.54, stdev=1999.55 00:32:14.936 clat (usec): min=221, max=11898, avg=3146.42, stdev=652.24 00:32:14.936 lat (usec): min=226, max=11994, avg=3152.53, stdev=653.46 00:32:14.936 clat percentiles (usec): 00:32:14.936 | 1.00th=[ 2540], 
5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2868], 00:32:14.936 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:32:14.936 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3818], 00:32:14.936 | 99.00th=[ 6849], 99.50th=[ 7963], 99.90th=[ 8455], 99.95th=[ 8979], 00:32:14.936 | 99.99th=[11600] 00:32:14.936 bw ( KiB/s): min=74112, max=81192, per=97.01%, avg=78360.00, stdev=3746.38, samples=3 00:32:14.936 iops : min=18528, max=20298, avg=19590.00, stdev=936.60, samples=3 00:32:14.936 write: IOPS=20.2k, BW=78.7MiB/s (82.5MB/s)(158MiB/2001msec); 0 zone resets 00:32:14.936 slat (nsec): min=4104, max=91214, avg=6437.44, stdev=2033.89 00:32:14.936 clat (usec): min=258, max=11721, avg=3164.07, stdev=673.59 00:32:14.936 lat (usec): min=264, max=11738, avg=3170.51, stdev=674.79 00:32:14.936 clat percentiles (usec): 00:32:14.936 | 1.00th=[ 2540], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2900], 00:32:14.936 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:32:14.936 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3851], 00:32:14.936 | 99.00th=[ 7177], 99.50th=[ 7963], 99.90th=[ 8586], 99.95th=[ 9241], 00:32:14.936 | 99.99th=[11338] 00:32:14.936 bw ( KiB/s): min=74296, max=81440, per=97.36%, avg=78474.67, stdev=3723.35, samples=3 00:32:14.936 iops : min=18574, max=20360, avg=19618.67, stdev=930.84, samples=3 00:32:14.936 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:14.936 lat (msec) : 2=0.10%, 4=96.11%, 10=3.71%, 20=0.04% 00:32:14.936 cpu : usr=99.20%, sys=0.05%, ctx=3, majf=0, minf=606 00:32:14.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:14.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:14.936 issued rwts: total=40407,40323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:14.936 00:32:14.936 Run status group 0 (all jobs): 00:32:14.936 READ: bw=78.9MiB/s (82.7MB/s), 78.9MiB/s-78.9MiB/s (82.7MB/s-82.7MB/s), io=158MiB (166MB), run=2001-2001msec 00:32:14.936 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=158MiB (165MB), run=2001-2001msec 00:32:14.936 ----------------------------------------------------- 00:32:14.936 Suppressions used: 00:32:14.936 count bytes template 00:32:14.936 1 32 /usr/src/fio/parse.c 00:32:14.936 1 8 libtcmalloc_minimal.so 00:32:14.936 ----------------------------------------------------- 00:32:14.936 00:32:14.936 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:14.936 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:14.936 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:32:14.936 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:15.194 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:32:15.194 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:15.454 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:15.454 17:28:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:15.454 17:28:52 nvme.nvme_fio -- 
common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:15.454 17:28:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:15.714 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:15.714 fio-3.35 00:32:15.714 Starting 1 thread 00:32:27.927 00:32:27.927 test: (groupid=0, jobs=1): err= 0: pid=66195: Tue Nov 26 17:29:04 2024 00:32:27.927 read: IOPS=23.3k, BW=91.0MiB/s (95.4MB/s)(182MiB/2001msec) 00:32:27.927 slat (nsec): min=3770, max=62469, avg=4986.04, stdev=1640.35 00:32:27.927 clat (usec): min=226, max=10554, avg=2728.00, stdev=519.38 00:32:27.927 lat (usec): min=231, max=10559, avg=2732.99, stdev=520.19 00:32:27.927 clat percentiles (usec): 00:32:27.927 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540], 00:32:27.927 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:32:27.927 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 3032], 00:32:27.927 | 99.00th=[ 5276], 99.50th=[ 6849], 99.90th=[ 8029], 99.95th=[ 8586], 00:32:27.927 | 99.99th=[ 9896] 00:32:27.927 bw ( KiB/s): min=89144, max=94384, per=98.20%, avg=91490.67, stdev=2662.43, samples=3 00:32:27.927 iops : min=22286, max=23596, avg=22872.67, stdev=665.61, samples=3 00:32:27.927 write: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(181MiB/2001msec); 0 zone resets 00:32:27.927 slat (nsec): min=3892, max=55729, avg=5600.75, stdev=1710.34 00:32:27.927 clat (usec): min=259, max=12322, avg=2757.69, stdev=642.08 00:32:27.927 lat (usec): min=264, max=12327, avg=2763.29, stdev=642.80 00:32:27.927 clat percentiles (usec): 00:32:27.927 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2540], 00:32:27.927 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 
2704], 00:32:27.927 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 3097], 00:32:27.927 | 99.00th=[ 5932], 99.50th=[ 7504], 99.90th=[12125], 99.95th=[12125], 00:32:27.927 | 99.99th=[12256] 00:32:27.927 bw ( KiB/s): min=88752, max=95736, per=99.01%, avg=91624.00, stdev=3653.39, samples=3 00:32:27.927 iops : min=22188, max=23934, avg=22906.00, stdev=913.35, samples=3 00:32:27.927 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:32:27.927 lat (msec) : 2=0.75%, 4=97.07%, 10=2.07%, 20=0.07% 00:32:27.927 cpu : usr=99.40%, sys=0.05%, ctx=8, majf=0, minf=605 00:32:27.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:27.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.927 issued rwts: total=46608,46292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.927 00:32:27.927 Run status group 0 (all jobs): 00:32:27.927 READ: bw=91.0MiB/s (95.4MB/s), 91.0MiB/s-91.0MiB/s (95.4MB/s-95.4MB/s), io=182MiB (191MB), run=2001-2001msec 00:32:27.927 WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=181MiB (190MB), run=2001-2001msec 00:32:27.927 ----------------------------------------------------- 00:32:27.927 Suppressions used: 00:32:27.927 count bytes template 00:32:27.927 1 32 /usr/src/fio/parse.c 00:32:27.927 1 8 libtcmalloc_minimal.so 00:32:27.927 ----------------------------------------------------- 00:32:27.927 00:32:27.927 17:29:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:27.927 17:29:04 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:32:27.927 00:32:27.927 real 0m32.783s 00:32:27.927 user 0m20.395s 00:32:27.927 sys 0m21.345s 00:32:27.927 17:29:04 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.927 17:29:04 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:32:27.927 ************************************ 00:32:27.927 END TEST nvme_fio 00:32:27.927 ************************************ 00:32:27.927 ************************************ 00:32:27.927 END TEST nvme 00:32:27.927 ************************************ 00:32:27.927 00:32:27.927 real 1m50.448s 00:32:27.927 user 3m58.538s 00:32:27.927 sys 0m36.788s 00:32:27.927 17:29:04 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.927 17:29:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:27.927 17:29:04 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:32:27.927 17:29:04 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:27.927 17:29:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.927 17:29:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.927 17:29:04 -- common/autotest_common.sh@10 -- # set +x 00:32:27.927 ************************************ 00:32:27.927 START TEST nvme_scc 00:32:27.927 ************************************ 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:27.927 * Looking for test storage... 
00:32:27.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@345 -- # : 1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@368 -- # return 0 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:27.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.927 --rc genhtml_branch_coverage=1 00:32:27.927 --rc genhtml_function_coverage=1 00:32:27.927 --rc genhtml_legend=1 00:32:27.927 --rc geninfo_all_blocks=1 00:32:27.927 --rc geninfo_unexecuted_blocks=1 00:32:27.927 00:32:27.927 ' 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:27.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.927 --rc genhtml_branch_coverage=1 00:32:27.927 --rc genhtml_function_coverage=1 00:32:27.927 --rc genhtml_legend=1 00:32:27.927 --rc geninfo_all_blocks=1 00:32:27.927 --rc geninfo_unexecuted_blocks=1 00:32:27.927 00:32:27.927 ' 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:32:27.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.927 --rc genhtml_branch_coverage=1 00:32:27.927 --rc genhtml_function_coverage=1 00:32:27.927 --rc genhtml_legend=1 00:32:27.927 --rc geninfo_all_blocks=1 00:32:27.927 --rc geninfo_unexecuted_blocks=1 00:32:27.927 00:32:27.927 ' 00:32:27.927 17:29:04 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:27.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.927 --rc genhtml_branch_coverage=1 00:32:27.927 --rc genhtml_function_coverage=1 00:32:27.927 --rc genhtml_legend=1 00:32:27.927 --rc geninfo_all_blocks=1 00:32:27.927 --rc geninfo_unexecuted_blocks=1 00:32:27.927 00:32:27.927 ' 00:32:27.927 17:29:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:27.927 17:29:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:27.927 17:29:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:27.927 17:29:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:27.927 17:29:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.927 17:29:04 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.928 17:29:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.928 17:29:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.928 17:29:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.928 17:29:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:32:27.928 17:29:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
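[editor's note] The PATH echoed just above carries the same toolchain directories (/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin) four times over, apparently because each pass through paths/export.sh prepends them unconditionally. A minimal sketch of a guarded prepend that would keep PATH duplicate-free; prepend_path is an illustrative name, not a function from the SPDK tree:

    # Hypothetical helper, not part of SPDK's paths/export.sh: prepend a
    # directory only when it is not already somewhere in PATH.
    prepend_path() {
        local dir=$1
        case ":$PATH:" in
            *":$dir:"*) ;;              # already present, keep PATH as-is
            *) PATH=$dir:$PATH ;;
        esac
    }

    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH

The ":$PATH:" wrapping is the usual trick for matching a whole path component, so /opt/go/1.21.1/bin is not mistaken for a substring of a longer entry.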
00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:27.928 17:29:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:32:27.928 17:29:04 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:27.928 17:29:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:32:27.928 17:29:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:27.928 17:29:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:32:27.928 17:29:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:27.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:28.188 Waiting for block devices as requested 00:32:28.188 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.188 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.448 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.448 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:33.723 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:33.723 17:29:10 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:33.723 17:29:10 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:33.723 17:29:10 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:33.723 17:29:10 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:33.723 17:29:10 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
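[editor's note] scan_nvme_ctrls above walks /sys/class/nvme/nvme*, resolves each controller to its PCI address (here 0000:00:11.0), filters it through pci_can_use, and only then shells out to nvme-cli. A rough sketch of that enumeration under the usual sysfs layout; the pci_can_use filtering and the per-controller arrays built by the real test/common/nvme/functions.sh are omitted:

    # Sketch only: map controller names to PCI addresses via sysfs, assuming
    # the common layout where nvmeN/device links to the PCI function.
    declare -A bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # unexpanded glob, no devices
        name=${ctrl##*/}                                  # e.g. nvme0
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        bdfs[$name]=$pci
    done
    for name in "${!bdfs[@]}"; do
        echo "$name -> ${bdfs[$name]}"
    done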
00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:33.723 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
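[editor's note] Every iteration traced above is the same pattern: read one "field : value" line from nvme-cli with IFS=:, skip it when the value is empty, and eval the pair into the controller's associative array (nvme0[mdts]=7, nvme0[oacs]=0x12a, and so on). A condensed sketch of that parse loop, assuming plain id-ctrl output; the real nvme_get evals into a dynamically named array (nvme0, nvme1, ...) rather than a fixed one:

    # Minimal sketch of the id-ctrl parse loop; 'ctrl' stands in for the
    # dynamically named per-controller array built by nvme_get.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # header/blank lines carry no value
        reg=${reg//[[:space:]]/}           # 'mdts      ' -> 'mdts'
        val=${val# }                       # drop the space after the colon
        ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "mdts=${ctrl[mdts]} oacs=${ctrl[oacs]}"

Because val is the last variable passed to read, anything after the first colon survives intact, which is why values that themselves contain colons (subnqn=nqn.2019-08.org.qemu:12341, the ps0 power-state string) come through whole in the trace above.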
00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:33.724 17:29:10 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:33.724 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:33.725 17:29:10 nvme_scc -- nvme/functions.sh@23 
[nvme_scc trace, condensed] nvme0 id-ctrl, remaining registers: fna=0 vwc=0x7
awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0
maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0
msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
[nvme_scc trace, condensed] namespace scan for nvme0 (functions.sh@53-57):
_ctrl_ns is bound to nvme0_ns; /sys/class/nvme/nvme0/ng0n1 is matched, so
nvme_get ng0n1 id-ns /dev/ng0n1 runs /usr/local/src/nvme-cli/nvme id-ns
/dev/ng0n1 and fills the global associative array ng0n1.
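For readability, here is a minimal, self-contained bash sketch of the pattern
this xtrace is exercising (function name and call shape taken from the trace
at nvme/functions.sh@16-23; the nvme-cli path is the one shown in the log,
and the whitespace trimming is a simplification, not the exact helper source):

    NVME_CMD=/usr/local/src/nvme-cli/nvme    # binary path as seen in the trace

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global associative array, e.g. ng0n1
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip lines without a "reg : val" pair
            reg=${reg//[[:space:]]/}         # "vwc       " -> "vwc", "ps    0 " -> "ps0"
            val=${val# }                     # drop the space after the colon
            eval "${ref}[${reg}]=\"\${val}\""   # ng0n1[nsze]="0x140000", as traced
        done < <("$NVME_CMD" "$@")
    }
    # call shape as traced: nvme_get ng0n1 id-ns /dev/ng0n1

Because read hands the unsplit remainder of each line to its last variable,
values that themselves contain colons survive intact, which is why the trace
records nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' as a
single entry.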
[nvme_scc trace, condensed] ng0n1 id-ns registers: nsze=0x140000 ncap=0x140000
nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
nguid=00000000000000000000000000000000 eui64=0000000000000000
lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0'
lbaf7='ms:64 lbads:12 rp:0'; registered as _ctrl_ns[1]=ng0n1 (functions.sh@58).
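The namespace scan relies on bash extended globbing; a standalone sketch of
that matching step, with the paths from this run:

    shopt -s extglob                          # functions.sh-style extended globbing
    ctrl=/sys/class/nvme/nvme0
    # ${ctrl##*nvme} leaves the controller index (0) and ${ctrl##*/} the
    # device name (nvme0), so the alternation expands to @(ng0|nvme0n)* and
    # picks up both the generic char device ng0n1 and the block device nvme0n1:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue              # an unmatched glob is passed through literally
        ns_dev=${ns##*/}
        echo "namespace device: $ns_dev"      # ng0n1, then nvme0n1
    done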
[nvme_scc trace, condensed] /sys/class/nvme/nvme0/nvme0n1 is matched next;
nvme_get nvme0n1 id-ns /dev/nvme0n1 reports the same geometry as ng0n1:
nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
nguid=00000000000000000000000000000000 eui64=0000000000000000, with
lbaf0-lbaf7 as for ng0n1 and lbaf4='ms:0 lbads:12 rp:0 (in use)'.
[nvme_scc trace, condensed] nvme0 bookkeeping (functions.sh@58-63):
_ctrl_ns[1]=nvme0n1 ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns
bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0. The controller loop then
advances (functions.sh@47-51): /sys/class/nvme/nvme1 exists, pci=0000:00:10.0,
pci_can_use returns 0 (scripts/common.sh@18-27: not blocked, no allow-list
set), ctrl_dev=nvme1.
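Pulling the traced steps together, the outer controller loop has roughly the
following shape, reusing the nvme_get sketch above (array and function names
are from the trace at functions.sh@47-63; the PCI address lookup and the
PCI_ALLOWED/PCI_BLOCKED variables are assumptions, not read from this log):

    # Condensed from the scripts/common.sh@18-27 steps in the trace: a BDF is
    # usable if it is not block-listed and either no allow-list is set or it
    # appears there. (Variable names here are assumed, not shown in the log.)
    pci_can_use() {
        [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1
        [[ -z ${PCI_ALLOWED:-} ]] && return 0
        [[ ${PCI_ALLOWED} =~ $1 ]]
    }

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0; assumed lookup
        pci_can_use "$pci" || continue                   # skip controllers we may not touch
        ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ... per-namespace nvme_get calls as sketched earlier, then:
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # name of that controller's ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index by controller number
    done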
[nvme_scc trace, condensed] nvme_get nvme1 id-ctrl /dev/nvme1 fills array
nvme1: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0
vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0
apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:33.731 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.731 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:33.732 17:29:11 
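The loop driving all of the trace above is the heart of nvme_get: it splits each "field : value" line of `nvme id-ctrl` output on the first colon, keeps the raw value (including trailing padding, as seen in sn/mn later), and evals the pair into a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's plain-text output format; the function name here is illustrative, not the one in functions.sh:

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing pattern exercised in the trace.
nvme_get_sketch() {
    local ref=$1 reg val
    local -gA "$ref=()"                      # e.g. declare -gA nvme1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces only
        [[ -n $reg && -n $val ]] || continue # skip the banner/blank lines
        eval "${ref}[$reg]=\"\$val\""        # nvme1[sqes]='0x66', ...
    done < <(nvme id-ctrl "/dev/$ref")       # requires nvme-cli installed
}
nvme_get_sketch nvme1 && echo "ONCS: ${nvme1[oncs]}"

The eval indirection is what lets one function fill nvme1, nvme2, ng1n1, etc. from a single code path, at the cost of the very verbose xtrace output seen in this log.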
00:32:33.732 17:29:11 nvme_scc -- nvme/functions.sh -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:33.733 17:29:11 nvme_scc -- nvme/functions.sh -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:32:33.734 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:32:33.734 17:29:11 nvme_scc -- nvme/functions.sh@54-57 -- # namespace scan continues with nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
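The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` loop in the trace is an extglob that matches both namespace flavours under a controller's sysfs directory in one pass: the generic character node (ng1n1) and the block namespace (nvme1n1). A hedged sketch of that enumeration; the echo body is illustrative:

#!/usr/bin/env bash
# Sketch of the sysfs namespace enumeration seen in the trace.
shopt -s extglob nullglob
for ctrl in /sys/class/nvme/nvme+([0-9]); do
    idx=${ctrl##*nvme}                    # "1" for /sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${idx}"|"${ctrl##*/}n")*; do
        # matches e.g. .../nvme1/ng1n1 and .../nvme1/nvme1n1
        echo "controller nvme${idx}: namespace entry ${ns##*/}"
    done
done

nullglob keeps the loop body from running on the literal pattern when a controller has no namespaces, which is why both ng1n1 and nvme1n1 show up back to back in this log.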
00:32:34.001 17:29:11 nvme_scc -- nvme/functions.sh -- # nvme1n1 id-ns: field-for-field identical to ng1n1 above (nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, zero nguid/eui64, same lbaf0-lbaf7 table with lbaf7='ms:64 lbads:12 rp:0 (in use)')
00:32:34.001 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
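Once populated, these arrays can be decoded directly. For the values above, the low nibble of flbas=0x7 selects lbaf7, i.e. 4096-byte data blocks with 64 bytes of metadata per block. An illustrative sketch, not code from functions.sh, using the values this log just stored:

#!/usr/bin/env bash
# Decode the in-use LBA format from the fields parsed above.
declare -A nvme1n1=(
    [flbas]=0x7
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
)
fmt=$(( ${nvme1n1[flbas]} & 0xf ))           # low nibble picks the format
desc=${nvme1n1[lbaf$fmt]}
lbads=${desc##*lbads:}; lbads=${lbads%% *}   # pull out the lbads exponent
echo "in-use format lbaf$fmt: $desc -> block size $(( 1 << lbads )) B"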
00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme2 at pci=0000:00:12.0; pci_can_use 0000:00:12.0 -> return 0 (no PCI allow list set); ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
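The @60-@63 bookkeeping records each successfully parsed controller in three maps plus an index-ordered list, keyed exactly as the log shows (nvme1 maps to its namespace array name nvme1_ns and its PCI address). A simplified sketch of that step; register_ctrl is an invented helper name, not a functions.sh symbol:

#!/usr/bin/env bash
# Sketch of the controller registration bookkeeping from the trace.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

register_ctrl() {
    local ctrl_dev=$1 bdf=$2
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns        # name of its namespace map
    bdfs["$ctrl_dev"]=$bdf
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # numeric suffix as index
}

register_ctrl nvme1 0000:00:10.0
register_ctrl nvme2 0000:00:12.0
echo "ordered: ${ordered_ctrls[*]} (nvme1 at ${bdfs[nvme1]})"

Storing the namespace array by name rather than by value is what makes the `local -n _ctrl_ns` nameref at @53 work for every controller.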
'nvme2[fr]="8.0.0 "' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.002 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:34.003 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:34.003 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:34.003 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:34.004 
17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:34.004 
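The trace above is nvme_get filling the global associative array nvme2 from `nvme id-ctrl` output: each `register : value` line is split on the first colon (IFS=:), blank values are skipped by the `[[ -n ... ]]` guard, and the rest are eval'd into the array. A minimal sketch of that loop, reconstructed from the xtrace alone (NVME_BIN is a placeholder for the /usr/local/src/nvme-cli/nvme binary seen in this run; this is an illustration, not the verbatim nvme/functions.sh source):

# Reconstructed from the xtrace: parse "reg : val" lines into a global
# associative array named by the first argument (e.g. nvme2, ng2n1).
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                      # matches: local -gA 'nvme2=()'
    while IFS=: read -r reg val; do
        reg=${reg// /}                       # "ps 0 " -> "ps0"
        val=${val# }                         # drop the single leading space
        [[ -n $val ]] || continue            # matches: [[ -n '' ]] skipping headers
        eval "${ref}[${reg}]=\"${val}\""     # matches: eval 'nvme2[mdts]="7"'
    done < <("${NVME_BIN:-nvme}" "$@")       # e.g. nvme id-ctrl /dev/nvme2
}

Invoked as nvme_get nvme2 id-ctrl /dev/nvme2, this makes lookups such as ${nvme2[mdts]} (7 here) and ${nvme2[subnqn]} (nqn.2019-08.org.qemu:12342) available to the rest of the suite.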
17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:32:34.004 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:32:34.005 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:32:34.006 17:29:11 nvme_scc -- 
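The loop header visible in the trace, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, relies on extglob to enumerate both the character-device entries (ng2n1, ng2n2, ...) and the block-device entries (nvme2n1, ...) under /sys/class/nvme/nvme2; each parsed namespace is then registered via _ctrl_ns[${ns##*n}]=ng2n1, i.e. keyed by its namespace index. A standalone illustration of those expansions, with paths assumed from this run:

# The two alternatives in the extglob pattern, spelled out for nvme2:
#   ${ctrl##*nvme} -> "2"       so "ng2"*    matches ng2n1, ng2n2, ng2n3
#   ${ctrl##*/}    -> "nvme2"   so "nvme2n"* matches nvme2n1, nvme2n2, ...
shopt -s extglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "dev=${ns##*/} nsid=${ns##*n}"      # ng2n1 -> dev=ng2n1 nsid=1
done

The id-ns fields recorded for each namespace decode the same way as the controller ones; for example lbaf4 'ms:0 lbads:12 rp:0 (in use)' together with flbas=0x4 means the active LBA format has no metadata and 2^12 = 4096-byte logical blocks.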
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:32:34.006 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 
17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.007 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:34.008 17:29:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.008 17:29:11 
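The machinery that repeats throughout this trace is the nvme_get helper visible at functions.sh@16-23: it runs the local nvme-cli build's id-ns against a namespace node, splits each output line on the first ':' via IFS, and evals the pair into a global associative array named after the device. A minimal reconstruction of that pattern, with the loop body simplified (an assumption; only the quoted fragments above are taken from the trace):

    nvme_get() {                              # e.g. nvme_get ng2n3 id-ns /dev/ng2n3
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # the declaration seen at @20
        while IFS=: read -r reg val; do       # split "nsze : 0x100000" at the colon
            reg=${reg//[[:space:]]/}          # "lbaf  4 " -> "lbaf4"
            val=${val# }                      # leaves "ms:0 lbads:12 ..." intact
            [[ -n $reg && -n $val ]] || continue   # the [[ -n ... ]] guard at @22
            eval "${ref}[${reg}]=\"${val}\""  # ng2n3[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Callers then read the results straight out of the arrays, e.g. ${ng2n3[nsze]}.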
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.008 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # 
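Worth decoding before the next namespace repeats the same values: nsfeat=0x14 sets only bits 2 and 4, which read against the NVMe base specification (an interpretation; the trace itself only records the raw value) as deallocated/unwritten-LBA error reporting plus the optional performance fields being defined, while bit 1 being clear is why the per-namespace atomic-write fields nawun/nawupf/nacwu parse as plain zeros. The copy limits mssrl=128, mcl=128 and msrc=127 bound the Copy command, msrc being a 0's-based count (up to 128 source ranges). The bit tests, for reference:

    nsfeat=0x14                      # raw value parsed above
    echo $(( nsfeat & (1 << 1) ))    # 0  -> per-namespace atomic-write fields unused
    echo $(( nsfeat & (1 << 2) ))    # 4  -> deallocated/unwritten LBA errors reported
    echo $(( nsfeat & (1 << 4) ))    # 16 -> the npwg/npwa/npdg/npda/nows group is defined (all zero here)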
IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- 
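This transition also shows why every namespace appears twice in the trace: the loop at functions.sh@54-58 globs the controller's sysfs directory for both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3) and keys each into _ctrl_ns by its namespace index, so on this run the nvme2nX entries simply overwrite the ng2nX ones. Spelled out under that reading (the shopt settings are inferred, not shown in the trace):

    shopt -s extglob nullglob                 # required by the @( | ) glob below
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # ng2n*, nvme2n*
        [[ -e $ns ]] || continue              # @55
        ns_dev=${ns##*/}                      # @56: ng2n2, nvme2n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: populate the array
        _ctrl_ns[${ns##*n}]=$ns_dev           # @58: key "2" -> last writer wins
    done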
nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.009 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.010 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:32:34.010 17:29:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.010 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
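The eight lbafN strings captured for each namespace follow nvme-cli's LBA-format layout: ms is the per-block metadata size in bytes, lbads the log2 of the data size, and rp a relative-performance hint. With flbas=0x4 the in-use format is lbaf4 (ms:0 lbads:12), i.e. 4096-byte blocks with no separate metadata, matching the "(in use)" tag. A hypothetical helper to recover that as a byte count from the arrays built above:

    ns_block_size() {
        local -n ns=$1                        # nameref onto e.g. nvme2n1
        local fmt=$(( ${ns[flbas]} & 0xf ))   # low nibble of flbas selects the format
        local lbads=${ns[lbaf$fmt]#*lbads:}   # "ms:0 lbads:12 rp:0 ..." -> "12 rp:0 ..."
        lbads=${lbads%% *}                    # -> "12"
        echo $(( 1 << lbads ))                # 2^12 = 4096 bytes
    }
    ns_block_size nvme2n1                     # prints 4096 for the values above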
]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:34.011 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.011 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.012 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:34.012 
17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:34.013 17:29:11 nvme_scc -- 
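Taken together the size fields are easy to sanity-check: nsze=0x100000 blocks of 4096 bytes each (per the in-use lbaf4) is exactly 4 GiB, and ncap and nuse equal to nsze, with the thin-provisioning bit of nsfeat clear, indicate a fully allocated namespace. As a quick check in the same shell:

    echo $(( 0x100000 ))                     # 1048576 blocks
    echo $(( 0x100000 * 4096 ))              # 4294967296 bytes
    echo $(( (0x100000 * 4096) >> 30 ))GiB   # 4GiB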
nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.013 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:34.014 17:29:11 nvme_scc -- 
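Every register record above comes out of one small loop: nvme-cli prints "key : value" lines, nvme_get splits each at the first colon and evals it into a bash associative array named after the device. A sketch of nvme_get as it reads back out of the @16-23 trace; the exact whitespace trimming is an assumption, the rest mirrors the traced statements:

    nvme_get() {                          # e.g. nvme_get nvme2n3 id-ns /dev/nvme2n3
        local ref=$1 reg val              # @17
        shift                             # @18: remaining args form the nvme call
        local -gA "$ref=()"               # @20: global assoc array, e.g. nvme2n3
        while IFS=: read -r reg val; do   # @21: split "nsze : 0x100000" at the colon
            reg=${reg//[[:space:]]/}      # assumed: trim the key
            val=${val# }                  # assumed: drop one leading space
            [[ -n $reg && -n $val ]] || continue   # @22: skip blank lines
            eval "${ref}[$reg]=\"$val\""  # @23: nvme2n3[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }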
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:34.014 17:29:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:34.014 17:29:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:34.014 17:29:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:34.014 17:29:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.014 17:29:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 
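The @47-52 records just above show the outer controller scan: each /sys/class/nvme/nvmeX is mapped to a PCI address, filtered through pci_can_use, and handed to the same nvme_get parser with id-ctrl. A sketch under two assumptions: that the BDF comes from the sysfs device symlink, and that PCI_ALLOWED/PCI_BLOCKED are the env-var filters scripts/common.sh consults (their exact precedence is not visible in this trace):

    pci_can_use() {                       # scripts/common.sh@18-27, simplified
        local i bdf=$1
        [[ -n ${PCI_BLOCKED:-} && $PCI_BLOCKED == *"$bdf"* ]] && return 1
        [[ -z ${PCI_ALLOWED:-} || $PCI_ALLOWED == *"$bdf"* ]]
    }

    for ctrl in /sys/class/nvme/nvme*; do        # @47
        [[ -e $ctrl ]] || continue               # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of @49's BDF
        pci_can_use "$pci" || continue           # @50
        ctrl_dev=${ctrl##*/}                     # @51: e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52
    done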
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:34.275 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:34.275 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:34.276 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 
17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:34.276 17:29:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.276 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 
17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:34.277 
17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.277 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.278 17:29:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:34.278 17:29:11 nvme_scc -- 
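Once a controller's registers are parsed, the @53 and @60-63 records stitch four global maps together so later helpers can go from a controller name to its register array, its namespace map, and its PCI address. A sketch of that bookkeeping plus a read-back; register_ctrl is an illustrative name, the log inlines these assignments:

    declare -A ctrls nvmes bdfs           # functions.sh@10-12, declared at source time
    declare -a ordered_ctrls

    register_ctrl() {
        local ctrl_dev=$1 pci=$2
        local -n _ctrl_ns=${ctrl_dev}_ns  # @53: nameref onto e.g. nvme3_ns
        ctrls["$ctrl_dev"]=$ctrl_dev      # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns # @61
        bdfs["$ctrl_dev"]=$pci            # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63: slot 3 -> nvme3
    }

    register_ctrl nvme3 0000:00:13.0
    echo "${bdfs[nvme3]}"                 # -> 0000:00:13.0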
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:32:34.278 17:29:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:32:34.278 17:29:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:32:34.278 17:29:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:34.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:35.810 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.810 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.810 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.810 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.810 17:29:13 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:32:35.810 17:29:13 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:35.810 17:29:13 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.810 17:29:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:32:35.810 ************************************ 00:32:35.810 START TEST nvme_simple_copy 00:32:35.810 ************************************ 00:32:35.810 17:29:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:32:36.071 Initializing NVMe Controllers 00:32:36.071 Attaching to 0000:00:10.0 00:32:36.071 Controller supports SCC. Attached to 0000:00:10.0 00:32:36.071 Namespace ID: 1 size: 6GB 00:32:36.071 Initialization complete. 
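The feature probe that just selected nvme1 is a one-bit test: ctrl_has_scc pulls ONCS out of the controller's parsed array through a nameref (@69-76) and checks bit 8, which the NVMe spec assigns to the Copy command; 0x15d is 0b1_0101_1101, so the bit is set on all four controllers here. A sketch assembled from the traced statements:

    get_nvme_ctrl_feature() {             # functions.sh@69-76, as traced
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl              # nameref onto the parsed id-ctrl array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    ctrl_has_scc() {                      # functions.sh@184-188
        local ctrl=$1 oncs
        oncs=$(get_nvme_ctrl_feature "$ctrl" oncs) || return 1
        (( oncs & 1 << 8 ))               # ONCS bit 8: Copy command supported
    }

    ctrl_has_scc nvme1 && echo nvme1      # 0x15d & 0x100 != 0, so nvme1 is printed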
00:32:36.071 00:32:36.071 Controller QEMU NVMe Ctrl (12340 ) 00:32:36.071 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:32:36.071 Namespace Block Size:4096 00:32:36.071 Writing LBAs 0 to 63 with Random Data 00:32:36.071 Copied LBAs from 0 - 63 to the Destination LBA 256 00:32:36.071 LBAs matching Written Data: 64 00:32:36.071 00:32:36.071 real 0m0.302s 00:32:36.071 user 0m0.117s 00:32:36.071 sys 0m0.083s 00:32:36.071 17:29:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.071 17:29:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:32:36.071 ************************************ 00:32:36.071 END TEST nvme_simple_copy 00:32:36.071 ************************************ 00:32:36.071 ************************************ 00:32:36.071 END TEST nvme_scc 00:32:36.071 ************************************ 00:32:36.071 00:32:36.071 real 0m8.864s 00:32:36.071 user 0m1.566s 00:32:36.071 sys 0m2.302s 00:32:36.071 17:29:13 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.071 17:29:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:32:36.071 17:29:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:32:36.071 17:29:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:32:36.071 17:29:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:32:36.071 17:29:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:32:36.071 17:29:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:32:36.071 17:29:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:36.071 17:29:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.071 17:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:36.071 ************************************ 00:32:36.071 START TEST nvme_fdp 00:32:36.071 ************************************ 00:32:36.071 17:29:13 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:32:36.331 * Looking for test storage... 00:32:36.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.331 --rc genhtml_branch_coverage=1 00:32:36.331 --rc genhtml_function_coverage=1 00:32:36.331 --rc genhtml_legend=1 00:32:36.331 --rc geninfo_all_blocks=1 00:32:36.331 --rc geninfo_unexecuted_blocks=1 00:32:36.331 00:32:36.331 ' 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.331 --rc genhtml_branch_coverage=1 00:32:36.331 --rc genhtml_function_coverage=1 00:32:36.331 --rc genhtml_legend=1 00:32:36.331 --rc geninfo_all_blocks=1 00:32:36.331 --rc geninfo_unexecuted_blocks=1 00:32:36.331 00:32:36.331 ' 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.331 --rc genhtml_branch_coverage=1 00:32:36.331 --rc genhtml_function_coverage=1 00:32:36.331 --rc genhtml_legend=1 00:32:36.331 --rc geninfo_all_blocks=1 00:32:36.331 --rc geninfo_unexecuted_blocks=1 00:32:36.331 00:32:36.331 ' 00:32:36.331 17:29:13 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.331 --rc genhtml_branch_coverage=1 00:32:36.331 --rc genhtml_function_coverage=1 00:32:36.331 --rc genhtml_legend=1 00:32:36.331 --rc geninfo_all_blocks=1 00:32:36.331 --rc geninfo_unexecuted_blocks=1 00:32:36.331 00:32:36.331 ' 00:32:36.331 17:29:13 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:36.331 17:29:13 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:36.331 17:29:13 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:36.331 17:29:13 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:36.331 17:29:13 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.331 17:29:13 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.331 17:29:13 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.331 17:29:13 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.332 17:29:13 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.332 17:29:13 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:32:36.332 17:29:13 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:36.332 17:29:13 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:32:36.332 17:29:13 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:36.332 17:29:13 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:36.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:37.160 Waiting for block devices as requested 00:32:37.160 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:37.420 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:37.420 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:37.420 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:42.713 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:42.713 17:29:19 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:32:42.713 17:29:19 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:42.713 17:29:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:42.713 17:29:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:42.713 17:29:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:42.713 17:29:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:42.713 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:42.714 17:29:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:42.714 17:29:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:42.714 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:42.715 17:29:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.715 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 
17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:42.716 17:29:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:32:42.716 17:29:19 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:32:42.716 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:32:42.717 17:29:19 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
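Each eval in this stretch is one pass of the same capture loop: the trace's nvme_get splits every "field : value" line that nvme-cli prints on the first colon and stashes it in a per-device associative array (nvme0, ng0n1, and so on). A condensed sketch of that pattern, assuming the same plain-text id-ns output; the array name ns_info is illustrative:

    # Sketch of the nvme_get pattern traced above: id-ns fields into a map.
    declare -A ns_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names arrive space-padded
        val=${val# }                 # drop the pad after the colon
        [[ -n $reg ]] && ns_info[$reg]=$val
    done < <(nvme id-ns /dev/ng0n1)

    echo "nsze=${ns_info[nsze]}"     # -> nsze=0x140000, as captured above

One subtlety: with two variables, read assigns everything after the first colon to val, so composite fields survive intact, as the ng0n1[lbaf*] entries captured just below show.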
00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:32:42.717 17:29:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.717 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
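This lbaf0..lbaf7 table is what fixes the namespace geometry: flbas=0x4 (captured earlier for ng0n1) selects LBA format 4, the one flagged "(in use)", and its lbads of 12 gives a logical block of 2^12 = 4096 bytes, the same "Namespace Block Size:4096" the simple-copy test printed at the top of this run. The decode, spelled out with the values from this trace:

    # Worked decode of the fields captured above.
    flbas=0x4                    # low 4 bits index the active LBA format
    fmt=$(( flbas & 0xf ))       # -> 4, matching "lbaf4 ... (in use)"
    lbads=12                     # lbaf4 reports lbads:12
    echo "format $fmt, block size $(( 1 << lbads )) bytes"   # -> 4096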
00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:42.718 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.718 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:42.719 17:29:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:32:42.719 17:29:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:32:42.719 17:29:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:42.720 17:29:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:42.720 17:29:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:42.720 17:29:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:32:42.720 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.720 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
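Once a controller's namespaces are parsed, functions.sh@58-63 registers everything in a set of global maps: _ctrl_ns (namespace index -> device), ctrls, nvmes, bdfs (controller -> PCI address, 0000:00:11.0 for nvme0), and ordered_ctrls. The @47-52 loop then repeats for nvme1 at 0000:00:10.0 after pci_can_use confirms the device is neither block-listed nor outside an allow-list; both lists are empty in this run, which is why the xtrace shows the bare [[ =~ 0000:00:10.0 ]] test. The id-ctrl dump identifies QEMU's emulated controller (vid 0x1b36 is the Red Hat/QEMU PCI vendor ID, serial "12340"). Two fields are worth decoding, assuming the usual NVMe encodings and a 4 KiB CAP.MPSMIN page:

    ver=0x10400; mdts=7; page=4096   # values from the dump above; page size assumed
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(( (ver >> 8) & 0xff )) $((ver & 0xff))  # 1.4.0
    printf 'MDTS cap: %d KiB per transfer\n' $(( (1 << mdts) * page / 1024 ))          # 512 KiB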
00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
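Most of this stretch is zeros: fields like hmpre/hmmin (host memory buffer), sanicap (sanitize), and hctma/mntmt/mxtmt (thermal management) are optional, and the QEMU model simply reports 0 for features it doesn't implement. The two thresholds that are set, wctemp=343 and cctemp=373, follow the NVMe convention of reporting temperatures in kelvins:

    k2c() { echo $(( $1 - 273 )); }          # kelvin -> Celsius, integer math
    echo "warning threshold:  $(k2c 343) C"  # wctemp=343 -> 70 C
    echo "critical threshold: $(k2c 373) C"  # cctemp=373 -> 100 C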
00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.721 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:42.722 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:32:42.723 17:29:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
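The tail of the id-ctrl dump pins the queue geometry (sqes=0x66 and cqes=0x44 encode minimum and maximum entry sizes as powers of two: 64-byte submission and 16-byte completion entries), allows up to nn=256 namespaces, and carries a fabrics-style subnqn (nqn.2019-08.org.qemu:12340) matching the serial. functions.sh@53 then re-points the _ctrl_ns nameref at nvme1_ns and repeats the per-namespace pass for ng1n1, the character-device ("generic") handle of nvme1's first namespace. Its flbas=0x7 selects LBA format 7 via the low nibble, which in these QEMU format tables is ms:64 lbads:12, i.e. 4 KiB data blocks with 64 bytes of separate metadata; nsze counts logical blocks, so (assuming that 4 KiB format):

    nsze=0x17a17a; lbads=12
    echo "namespace size: $(( nsze * (1 << lbads) )) bytes"   # 6343335936 B, ~5.9 GiB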
00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:32:42.723 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:32:42.724 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.724 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:42.725 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:32:42.725 17:29:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:32:42.725 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:32:42.726 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:32:42.990 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
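
The lbafN strings captured here ("ms:.. lbads:.. rp:..") pack the metadata size, log2 of the data size, and a relative-performance hint; the low nibble of flbas selects the format in use, so the flbas=0x7 above points at lbaf7 (lbads:12, i.e. 4096-byte blocks with 64 bytes of metadata). A hypothetical helper, not part of functions.sh, to pull the block size out of such a string:

    lbaf_block_size() {          # lbaf_block_size "ms:64 lbads:12 rp:0 (in use)"
        local desc=$1 lbads
        lbads=${desc#*lbads:}    # -> "12 rp:0 (in use)"
        lbads=${lbads%% *}       # -> "12"
        echo $(( 1 << lbads ))   # 2^12 = 4096-byte data blocks
    }
    # e.g. lbaf_block_size "${nvme1n1[lbaf$(( ${nvme1n1[flbas]} & 0xf ))]}"  -> 4096
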
00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.990 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:32:42.991 17:29:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:42.991 17:29:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:42.991 17:29:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:42.991 17:29:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
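
The @47-@63 steps around this parse are the sysfs walk that located the controller: each /sys/class/nvme/nvmeX is checked with pci_can_use, its PCI address is recorded, and both the char (ngXnY) and block (nvmeXnY) namespace nodes are enumerated via the extglob pattern visible in the trace. A rough standalone equivalent, assuming extglob/nullglob and taking the address from the device symlink:

    shopt -s extglob nullglob
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        dev=${ctrl##*/}                       # nvme1, nvme2, ...
        pci=$(readlink -f "$ctrl/device")
        ctrls[$dev]=$dev
        bdfs[$dev]=${pci##*/}                 # e.g. 0000:00:12.0
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${dev}n")*; do
            echo "$dev (${bdfs[$dev]}): ${ns##*/}"    # ng2n1, nvme2n1, ...
        done
    done
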
00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:42.991 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:42.992 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
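
Two quick sanity checks on the words just captured, under stated assumptions: MDTS is a power of two in units of CAP.MPSMIN (assumed 4 KiB here, the usual QEMU default), and wctemp/cctemp are kelvin values:

    mpsmin=4096                                 # assumption: CAP.MPSMIN = 4 KiB
    echo $(( (1 << ${nvme2[mdts]}) * mpsmin ))  # mdts=7 -> 524288-byte max transfer
    echo $(( ${nvme2[wctemp]} - 273 ))          # 343 K -> 70 C warning threshold
    echo $(( ${nvme2[cctemp]} - 273 ))          # 373 K -> 100 C critical threshold
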
00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:42.992 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.992 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # 
00:32:42.993 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 id-ctrl fields (contd): nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
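All of the records above come from one helper: nvme_get runs nvme-cli against a device and folds each "name : value" line of its output into a global associative array named after the device node, which is what the repeated IFS=: / read / eval trace lines are doing. A minimal sketch of that pattern, assuming the usual nvme-cli text output format; the function and variable names here are illustrative, not the exact functions.sh implementation:

nvme_get_sketch() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"               # global assoc array named after the device node
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # nvme-cli pads the field name; strip the padding
        val=${val# }                  # drop the space that follows the colon
        [[ -n $val ]] || continue     # skip blank fields, as the [[ -n ... ]] checks above do
        eval "${ref}[$reg]=\$val"     # e.g. nvme2[sqes]=0x66
    done < <("/usr/local/src/nvme-cli/nvme" "$cmd" "$dev")
}

Run as, say, nvme_get_sketch nvme2 id-ctrl /dev/nvme2; afterwards ${nvme2[sqes]} reads back 0x66 without another nvme-cli invocation, which is why the rest of the test can key off these arrays.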
00:32:42.994 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:42.995 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
00:32:42.995 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:42.995 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:32:42.995 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:32:42.995 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:32:42.995 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
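Between devices, the functions.sh@54 lines show how namespaces are discovered: an extglob pattern that matches both the generic char-dev nodes (ng2n1, ng2n2, ...) and the block-dev nodes (nvme2n1, ...) under the controller's sysfs entry. A self-contained sketch of that glob, assuming only that $ctrl points at the controller's sysfs directory:

shopt -s extglob nullglob   # extglob enables @(...|...); nullglob makes a no-match expand to nothing

ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern below expands to
# @(ng2|nvme2n)* and matches ng2n1, ng2n2, ng2n3 as well as nvme2n1.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"
done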
00:32:42.996 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
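The functions.sh@58 records are the bookkeeping half of the loop: _ctrl_ns is a nameref onto the per-controller map (nvme2_ns here), and the namespace index is whatever follows the last "n" of the node name. A standalone illustration of that nameref pattern, with the map populated by hand rather than from sysfs:

declare -A nvme2_ns=()          # per-controller namespace map, as nvme/functions.sh keeps it
declare -n _ctrl_ns=nvme2_ns    # nameref: writes to _ctrl_ns land in nvme2_ns

for ns_dev in ng2n1 ng2n2 ng2n3; do
    _ctrl_ns[${ns_dev##*n}]=$ns_dev   # ${ns_dev##*n} -> "1", "2", "3"
done

echo "${nvme2_ns[2]}"           # prints ng2n2, written through the nameref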
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:32:42.997 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
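The repeated id-ns values above decode consistently: flbas=0x4 selects LBA format 4 (the entry marked "in use"), whose lbads of 12 gives 4096-byte blocks with no metadata, and nsze=0x100000 blocks therefore puts each of these namespaces at 4 GiB. A small worked check of that arithmetic, assuming only the parsed values:

flbas=0x4 nsze=0x100000 lbads=12   # values parsed for ng2n1/ng2n2/ng2n3 above
fmt=$(( flbas & 0xf ))             # low nibble selects the format -> 4, matching "lbaf4 ... (in use)"
block=$(( 1 << lbads ))            # 2^12 -> 4096-byte data blocks (ms:0, no metadata)
bytes=$(( nsze * block ))          # 1048576 blocks * 4096 B -> 4294967296
echo "lbaf$fmt: ${block}B blocks, $(( bytes >> 30 )) GiB namespace"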
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:32:42.998 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:42.999 
17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:32:42.999 17:29:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:42.999 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:43.000 
17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
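The eight lbaf entries repeat identically for every namespace here, so they are worth decoding once: ms is the metadata bytes per block, lbads is log2 of the LBA data size (lbads:9 = 512 B, lbads:12 = 4096 B), and rp is the relative-performance hint. flbas=0x4 means the low nibble of FLBAS selects format index 4, which is the "ms:0 lbads:12 rp:0 (in use)" entry, so these QEMU namespaces run 4096-byte blocks without metadata. A hypothetical decoder over the array just built (the sed extraction is illustrative, not SPDK's code):

idx=$(( ${nvme2n1[flbas]} & 0xf ))                               # 0x4 -> lbaf4
lbads=$(sed -n 's/.*lbads:\([0-9]\+\).*/\1/p' <<< "${nvme2n1[lbaf$idx]}")
echo "block size: $((1 << lbads)) bytes"                         # 4096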
00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:32:43.000 17:29:20 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:43.000 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:43.001 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.001 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:43.002 17:29:20 
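The for-ns loop visible in the trace, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, relies on extglob alternation: with ctrl=/sys/class/nvme/nvme2 the pattern expands to @(ng2|nvme2n)*, so both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3) get parsed, in sorted glob order, which is why ng2n3 appeared before nvme2n1 above. Because ${ns##*n} strips everything up to the last 'n', ng2n3 and nvme2n3 both map to _ctrl_ns[3], the later block node overwriting the generic one. A standalone illustration (extglob must be enabled for @(...) patterns):

shopt -s extglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "${ns##*/} -> index ${ns##*n}"    # ng2n1 -> 1 ... nvme2n3 -> 3
done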
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:43.002 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:43.002 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:43.003 17:29:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:43.003 17:29:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:43.003 17:29:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:43.003 17:29:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:43.003 17:29:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.003 17:29:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:43.264 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:43.264 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.264 17:29:20 nvme_fdp -- 
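At this point nvme2 is fully registered: ctrls[nvme2]=nvme2, nvmes[nvme2]=nvme2_ns, bdfs[nvme2]=0000:00:12.0, ordered_ctrls[2]=nvme2, and the outer loop advances to /sys/class/nvme/nvme3 at 0000:00:13.0, which pci_can_use accepts (its [[ =~ ]] tests against empty block/allow lists fall through to return 0) before nvme_get starts on id-ctrl. A sketch of that outer enumeration; the sysfs-to-BDF step is an assumption on my part, and pci_can_use's list filtering is elided:

declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                                # nvme0 .. nvme3
    pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:13.0
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
done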
nvme/functions.sh@21 -- # read -r reg val 00:32:43.264 17:29:20 nvme_fdp -- nvme/functions.sh@22-23 -- # identify-controller data for nvme3 parsed into nvme3[reg]=val (repetitive per-register IFS=:/read/eval trace condensed to the resulting values):
00:32:43.264 nvme3: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:32:43.265 nvme3: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0
00:32:43.266 nvme3: nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:32:43.266 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.266 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.266 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:43.266 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:32:43.267 17:29:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:32:43.267 17:29:20 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:32:43.267 17:29:20 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:43.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:44.795 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:44.795 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:44.795 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:44.795 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:44.795 17:29:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:44.795 17:29:22 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:44.795 17:29:22 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.795 17:29:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:44.795 ************************************ 00:32:44.795 START TEST nvme_flexible_data_placement 00:32:44.795 ************************************ 00:32:44.795 17:29:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:45.054 Initializing NVMe Controllers 00:32:45.054 Attaching to 0000:00:13.0 00:32:45.054 Controller supports FDP Attached to 0000:00:13.0 00:32:45.054 Namespace ID: 1 Endurance Group ID: 1 00:32:45.054 Initialization complete. 
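The selection logic traced above boils down to two small pieces of nvme/functions.sh: a loop that splits each identify-controller line on ':' into a register name and value and stores it in a per-controller Bash associative array, and a predicate that tests CTRATT bit 19, the Flexible Data Placement capability bit (0x88010 has bit 19 set, 0x8000 does not). A minimal standalone sketch of the same idea, assuming nvme-cli's 'nvme id-ctrl' text output rather than the actual harness internals:

    declare -A ctrl    # register name -> value, e.g. ctrl[ctratt]=0x88010

    parse_id_ctrl() {  # $1 = character device, e.g. /dev/nvme3
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # 'vid       ' -> 'vid'
            [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
        done < <(nvme id-ctrl "$1")
    }

    ctrl_has_fdp() {
        # CTRATT bit 19 advertises Flexible Data Placement support.
        (( ${ctrl[ctratt]:-0} & 1 << 19 ))
    }

    parse_id_ctrl /dev/nvme3
    ctrl_has_fdp && echo nvme3    # succeeds for ctratt=0x88010

In this run nvme0, nvme1 and nvme2 all report ctratt=0x8000 and fail the bit test, so get_ctrl_with_feature returns nvme3 (bdf 0000:00:13.0) as the only FDP-capable target for the test below.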
00:32:45.054 00:32:45.054 ================================== 00:32:45.054 == FDP tests for Namespace: #01 == 00:32:45.054 ================================== 00:32:45.054 00:32:45.054 Get Feature: FDP: 00:32:45.054 ================= 00:32:45.054 Enabled: Yes 00:32:45.054 FDP configuration Index: 0 00:32:45.054 00:32:45.054 FDP configurations log page 00:32:45.054 =========================== 00:32:45.054 Number of FDP configurations: 1 00:32:45.054 Version: 0 00:32:45.054 Size: 112 00:32:45.054 FDP Configuration Descriptor: 0 00:32:45.054 Descriptor Size: 96 00:32:45.054 Reclaim Group Identifier format: 2 00:32:45.054 FDP Volatile Write Cache: Not Present 00:32:45.054 FDP Configuration: Valid 00:32:45.054 Vendor Specific Size: 0 00:32:45.054 Number of Reclaim Groups: 2 00:32:45.054 Number of Reclaim Unit Handles: 8 00:32:45.054 Max Placement Identifiers: 128 00:32:45.054 Number of Namespaces Supported: 256 00:32:45.054 Reclaim unit Nominal Size: 6000000 bytes 00:32:45.054 Estimated Reclaim Unit Time Limit: Not Reported 00:32:45.054 RUH Desc #000: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #001: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #002: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #003: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #004: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #005: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #006: RUH Type: Initially Isolated 00:32:45.054 RUH Desc #007: RUH Type: Initially Isolated 00:32:45.054 00:32:45.054 FDP reclaim unit handle usage log page 00:32:45.054 ====================================== 00:32:45.054 Number of Reclaim Unit Handles: 8 00:32:45.054 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:32:45.054 RUH Usage Desc #001: RUH Attributes: Unused 00:32:45.054 RUH Usage Desc #002: RUH Attributes: Unused 00:32:45.054 RUH Usage Desc #003: RUH Attributes: Unused 00:32:45.054 RUH Usage Desc #004: RUH Attributes: Unused 00:32:45.054 RUH Usage Desc #005: RUH Attributes: Unused 00:32:45.054 RUH Usage Desc #006: RUH Attributes: Unused 00:32:45.054 RUH Usage Desc #007: RUH Attributes: Unused 00:32:45.054 00:32:45.054 FDP statistics log page 00:32:45.054 ======================= 00:32:45.054 Host bytes with metadata written: 854192128 00:32:45.054 Media bytes with metadata written: 856375296 00:32:45.054 Media bytes erased: 0 00:32:45.054 00:32:45.054 FDP Reclaim unit handle status 00:32:45.054 ============================== 00:32:45.054 Number of RUHS descriptors: 2 00:32:45.054 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003161 00:32:45.054 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:32:45.054 00:32:45.054 FDP write on placement id: 0 success 00:32:45.054 00:32:45.054 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:32:45.054 00:32:45.054 IO mgmt send: RUH update for Placement ID: #0 Success 00:32:45.054 00:32:45.054 Get Feature: FDP Events for Placement handle: #0 00:32:45.054 ======================== 00:32:45.054 Number of FDP Events: 6 00:32:45.054 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:32:45.054 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:32:45.054 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:32:45.054 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:32:45.054 FDP Event: #4 Type: Media Reallocated Enabled: No 00:32:45.054 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:32:45.054 00:32:45.054 FDP events log page
00:32:45.054 =================== 00:32:45.054 Number of FDP events: 1 00:32:45.054 FDP Event #0: 00:32:45.054 Event Type: RU Not Written to Capacity 00:32:45.054 Placement Identifier: Valid 00:32:45.054 NSID: Valid 00:32:45.054 Location: Valid 00:32:45.055 Placement Identifier: 0 00:32:45.055 Event Timestamp: 8 00:32:45.055 Namespace Identifier: 1 00:32:45.055 Reclaim Group Identifier: 0 00:32:45.055 Reclaim Unit Handle Identifier: 0 00:32:45.055 00:32:45.055 FDP test passed 00:32:45.055 00:32:45.055 real 0m0.293s 00:32:45.055 user 0m0.093s 00:32:45.055 sys 0m0.097s 00:32:45.055 17:29:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.055 17:29:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:32:45.055 ************************************ 00:32:45.055 END TEST nvme_flexible_data_placement 00:32:45.055 ************************************ 00:32:45.055 00:32:45.055 real 0m8.958s 00:32:45.055 user 0m1.591s 00:32:45.055 sys 0m2.402s 00:32:45.055 17:29:22 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.055 ************************************ 00:32:45.055 END TEST nvme_fdp 00:32:45.055 ************************************ 00:32:45.055 17:29:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:45.314 17:29:22 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:32:45.314 17:29:22 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:45.314 17:29:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:45.314 17:29:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.314 17:29:22 -- common/autotest_common.sh@10 -- # set +x 00:32:45.314 ************************************ 00:32:45.314 START TEST nvme_rpc 00:32:45.314 ************************************ 00:32:45.314 17:29:22 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:45.314 * Looking for test storage... 
00:32:45.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:45.314 17:29:22 nvme_rpc -- common/autotest_common.sh@1692-1707 -- # lcov version check: lcov 1.15 is older than 2, so LCOV_OPTS and LCOV are exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 (repetitive scripts/common.sh version-compare trace condensed) 00:32:45.315 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:45.315 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:45.315 17:29:22 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:32:45.574 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:32:45.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.574 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67643 00:32:45.574 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:45.574 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67643 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67643 ']' 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.574 17:29:22 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.574 17:29:22 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:45.574 [2024-11-26 17:29:22.964090] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:32:45.574 [2024-11-26 17:29:22.964237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67643 ] 00:32:45.833 [2024-11-26 17:29:23.148922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:46.093 [2024-11-26 17:29:23.302759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.093 [2024-11-26 17:29:23.302805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.031 17:29:24 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.031 17:29:24 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:32:47.031 17:29:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:32:47.291 Nvme0n1 00:32:47.291 17:29:24 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:47.291 17:29:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:47.551 request: 00:32:47.551 { 00:32:47.551 "bdev_name": "Nvme0n1", 00:32:47.551 "filename": "non_existing_file", 00:32:47.551 "method": "bdev_nvme_apply_firmware", 00:32:47.551 "req_id": 1 00:32:47.551 } 00:32:47.551 Got JSON-RPC error response 00:32:47.551 response: 00:32:47.551 { 00:32:47.551 "code": -32603, 00:32:47.551 "message": "open file failed." 00:32:47.551 } 00:32:47.551 17:29:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:47.551 17:29:24 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:47.551 17:29:24 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:47.811 17:29:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:47.811 17:29:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67643 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67643 ']' 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67643 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67643 00:32:47.811 killing process with pid 67643 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67643' 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67643 00:32:47.811 17:29:25 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67643 00:32:51.183 ************************************ 00:32:51.183 END TEST nvme_rpc 00:32:51.183 ************************************ 00:32:51.183 00:32:51.183 real 0m5.410s 00:32:51.183 user 0m9.882s 00:32:51.183 sys 0m0.946s 00:32:51.183 17:29:27 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.183 17:29:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:51.183 17:29:27 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:51.183 17:29:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:32:51.183 17:29:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.183 17:29:27 -- common/autotest_common.sh@10 -- # set +x 00:32:51.183 ************************************ 00:32:51.183 START TEST nvme_rpc_timeouts ************************************ 00:32:51.183 17:29:27 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:51.183 * Looking for test storage... 00:32:51.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:51.183 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@1692-1707 -- # lcov version check: lcov 1.15 is older than 2, so LCOV_OPTS and LCOV are exported with the same coverage flags as in nvme_rpc above (duplicate scripts/common.sh version-compare trace condensed) 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67726 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67726 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67758 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:32:51.184 17:29:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67758 00:32:51.184 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67758 ']' 00:32:51.184 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.184 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.184 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.184 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.184 17:29:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:51.184 [2024-11-26 17:29:28.301038] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:32:51.184 [2024-11-26 17:29:28.301261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67758 ] 00:32:51.184 [2024-11-26 17:29:28.485637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:51.443 [2024-11-26 17:29:28.634620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.443 [2024-11-26 17:29:28.634679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.380 17:29:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.380 17:29:29 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:32:52.380 17:29:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:52.380 Checking default timeout settings: 00:32:52.380 17:29:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:52.977 Making settings changes with rpc: 00:32:52.977 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:52.977 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:52.977 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:32:52.977 Check default vs. 
modified settings: 00:32:52.977 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:32:53.550 Setting action_on_timeout is changed as expected. 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:32:53.550 Setting timeout_us is changed as expected. 
00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:53.550 Setting timeout_admin_us is changed as expected. 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67726 /tmp/settings_modified_67726 00:32:53.550 17:29:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67758 00:32:53.550 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67758 ']' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67758 00:32:53.550 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:32:53.550 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.550 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67758 00:32:53.550 killing process with pid 67758 00:32:53.551 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:53.551 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:53.551 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67758' 00:32:53.551 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67758 00:32:53.551 17:29:30 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67758 00:32:56.840 RPC TIMEOUT SETTING TEST PASSED. 00:32:56.840 17:29:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
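For anyone decoding the xtrace above: the whole nvme_rpc_timeouts test reduces to saving the target configuration before and after one RPC, then diffing three fields. A minimal sketch of that flow, reconstructed from the traced commands (the save_config redirections into the two tmp files are hidden by xtrace and assumed here):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Snapshot defaults, change the NVMe timeouts over RPC, snapshot again.
    $rpc_py save_config > /tmp/settings_default_67726
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc_py save_config > /tmp/settings_modified_67726

    # Strip each field down to alphanumerics and require that it changed.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67726 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67726 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed!"   # failure branch, not exercised in this run
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done

This matches the traced comparisons: none -> abort, 0 -> 12000000, 0 -> 24000000.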
00:32:56.840 00:32:56.840 real 0m5.756s 00:32:56.840 user 0m10.780s 00:32:56.840 sys 0m0.938s 00:32:56.840 ************************************ 00:32:56.840 END TEST nvme_rpc_timeouts 00:32:56.840 ************************************ 00:32:56.840 17:29:33 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.840 17:29:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:56.840 17:29:33 -- spdk/autotest.sh@239 -- # uname -s 00:32:56.840 17:29:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:32:56.840 17:29:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:56.840 17:29:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:56.840 17:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.840 17:29:33 -- common/autotest_common.sh@10 -- # set +x 00:32:56.840 ************************************ 00:32:56.840 START TEST sw_hotplug 00:32:56.840 ************************************ 00:32:56.840 17:29:33 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:56.840 * Looking for test storage... 00:32:56.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:56.840 17:29:33 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:56.840 17:29:33 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:32:56.840 17:29:33 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:56.840 17:29:34 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:32:56.840 17:29:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.841 17:29:34 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:32:56.841 17:29:34 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.841 17:29:34 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.841 --rc genhtml_branch_coverage=1 00:32:56.841 --rc genhtml_function_coverage=1 00:32:56.841 --rc genhtml_legend=1 00:32:56.841 --rc geninfo_all_blocks=1 00:32:56.841 --rc geninfo_unexecuted_blocks=1 00:32:56.841 00:32:56.841 ' 00:32:56.841 17:29:34 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.841 --rc genhtml_branch_coverage=1 00:32:56.841 --rc genhtml_function_coverage=1 00:32:56.841 --rc genhtml_legend=1 00:32:56.841 --rc geninfo_all_blocks=1 00:32:56.841 --rc geninfo_unexecuted_blocks=1 00:32:56.841 00:32:56.841 ' 00:32:56.841 17:29:34 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.841 --rc genhtml_branch_coverage=1 00:32:56.841 --rc genhtml_function_coverage=1 00:32:56.841 --rc genhtml_legend=1 00:32:56.841 --rc geninfo_all_blocks=1 00:32:56.841 --rc geninfo_unexecuted_blocks=1 00:32:56.841 00:32:56.841 ' 00:32:56.841 17:29:34 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.841 --rc genhtml_branch_coverage=1 00:32:56.841 --rc genhtml_function_coverage=1 00:32:56.841 --rc genhtml_legend=1 00:32:56.841 --rc geninfo_all_blocks=1 00:32:56.841 --rc geninfo_unexecuted_blocks=1 00:32:56.841 00:32:56.841 ' 00:32:56.841 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:57.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:57.389 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:57.389 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:57.389 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:57.389 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:57.389 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:32:57.389 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:32:57.389 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
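A side note on the scripts/common.sh trace that keeps recurring in these logs (here deciding lt 1.15 2 for the lcov version gate): it is a plain element-wise dotted-version compare. A condensed sketch of the traced logic -- the real helper also accumulates eq/ge/le state, and decimal()'s non-numeric branches are not exercised in this run:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d"   # only the numeric path shows up in the trace
    }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15), ver1_l=2
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2),    ver2_l=1
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        # Walk the longer array; the first differing element decides.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == ">" || $op == ">=" ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == "<" || $op == "<=" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }

    lt() { cmp_versions "$1" "<" "$2"; }

lt 1.15 2 compares 1 against 2 at v=0, hits the less-than branch, and returns 0, which is why every lcov_rc_opt/LCOV_OPTS export fires in these traces.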
00:32:57.389 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@233 -- # local class 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:32:57.389 17:29:34 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:57.648 17:29:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:57.649 17:29:34 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:32:57.649 17:29:34 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:57.649 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:32:57.649 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:32:57.649 17:29:34 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:58.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:58.477 Waiting for block devices as requested 00:32:58.477 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:58.477 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:58.477 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:58.736 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:04.014 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:04.014 17:29:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:33:04.014 17:29:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:04.274 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:33:04.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:04.534 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:33:04.794 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:33:05.053 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:05.053 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:05.053 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:33:05.053 17:29:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68649 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:33:05.313 17:29:42 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:33:05.313 17:29:42 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:33:05.313 17:29:42 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:33:05.313 17:29:42 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:33:05.313 17:29:42 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:05.313 17:29:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:33:05.572 Initializing NVMe Controllers 00:33:05.572 Attaching to 0000:00:10.0 00:33:05.572 Attaching to 0000:00:11.0 00:33:05.572 Attached to 0000:00:11.0 00:33:05.572 Attached to 0000:00:10.0 00:33:05.572 Initialization complete. Starting I/O... 
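Worth unpacking before the I/O counters start scrolling: the nvme_in_userspace trace a few lines up builds its device list purely from PCI class codes. The traced pipeline, verbatim apart from the comments (01/08/02 = mass storage / NVM subsystem / NVM Express programming interface):

    class=$(printf '%02x' 1)      # 01
    subclass=$(printf '%02x' 8)   # 08
    progif=$(printf '%02x' 2)     # 02

    # Keep programming-interface 02 lines, match class+subclass 0108 in
    # column 2, print the BDF from column 1, and drop lspci's quotes.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

Each BDF that survives (0000:00:10.0 through 0000:00:13.0 in this run) is then vetted with the per-device pci_can_use and /sys/bus/pci/drivers/nvme checks visible in the trace before landing in the nvmes array, which the test truncates to nvme_count=2.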
00:33:05.572 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:33:05.572 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:33:05.572 00:33:06.508 QEMU NVMe Ctrl (12341 ): 1548 I/Os completed (+1548) 00:33:06.508 QEMU NVMe Ctrl (12340 ): 1582 I/Os completed (+1582) 00:33:06.508 00:33:07.445 QEMU NVMe Ctrl (12341 ): 3616 I/Os completed (+2068) 00:33:07.445 QEMU NVMe Ctrl (12340 ): 3714 I/Os completed (+2132) 00:33:07.445 00:33:08.824 QEMU NVMe Ctrl (12341 ): 5700 I/Os completed (+2084) 00:33:08.824 QEMU NVMe Ctrl (12340 ): 5873 I/Os completed (+2159) 00:33:08.824 00:33:09.766 QEMU NVMe Ctrl (12341 ): 7912 I/Os completed (+2212) 00:33:09.766 QEMU NVMe Ctrl (12340 ): 8086 I/Os completed (+2213) 00:33:09.766 00:33:10.703 QEMU NVMe Ctrl (12341 ): 10000 I/Os completed (+2088) 00:33:10.703 QEMU NVMe Ctrl (12340 ): 10174 I/Os completed (+2088) 00:33:10.703 00:33:11.270 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:11.270 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:11.270 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:11.271 [2024-11-26 17:29:48.645946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:11.271 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:11.271 [2024-11-26 17:29:48.648393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.648506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.648531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.648554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:11.271 [2024-11-26 17:29:48.651635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.651710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.651727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.651744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:33:11.271 EAL: Scan for (pci) bus failed. 00:33:11.271 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:11.271 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:11.271 [2024-11-26 17:29:48.682995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:11.271 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:11.271 [2024-11-26 17:29:48.684566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.684637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.684667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.684686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:11.271 [2024-11-26 17:29:48.687524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.687570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.687595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 [2024-11-26 17:29:48.687623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:11.271 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:11.271 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:11.271 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:11.271 EAL: Scan for (pci) bus failed. 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:11.530 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:11.530 Attaching to 0000:00:10.0 00:33:11.530 Attached to 0000:00:10.0 00:33:11.530 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:11.811 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:11.811 17:29:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:11.811 Attaching to 0000:00:11.0 00:33:11.811 Attached to 0000:00:11.0 00:33:12.749 QEMU NVMe Ctrl (12340 ): 2180 I/Os completed (+2180) 00:33:12.749 QEMU NVMe Ctrl (12341 ): 1956 I/Os completed (+1956) 00:33:12.749 00:33:13.695 QEMU NVMe Ctrl (12340 ): 4344 I/Os completed (+2164) 00:33:13.695 QEMU NVMe Ctrl (12341 ): 4122 I/Os completed (+2166) 00:33:13.695 00:33:14.632 QEMU NVMe Ctrl (12340 ): 6500 I/Os completed (+2156) 00:33:14.632 QEMU NVMe Ctrl (12341 ): 6278 I/Os completed (+2156) 00:33:14.632 00:33:15.569 QEMU NVMe Ctrl (12340 ): 8643 I/Os completed (+2143) 00:33:15.569 QEMU NVMe Ctrl (12341 ): 8417 I/Os completed (+2139) 00:33:15.569 00:33:16.508 QEMU NVMe Ctrl (12340 ): 10833 I/Os completed (+2190) 00:33:16.508 QEMU NVMe Ctrl (12341 ): 10642 I/Os completed (+2225) 00:33:16.508 00:33:17.444 QEMU NVMe Ctrl (12340 ): 13073 I/Os completed (+2240) 00:33:17.444 QEMU NVMe Ctrl (12341 ): 12882 I/Os completed (+2240) 00:33:17.444 00:33:18.430 QEMU NVMe Ctrl (12340 ): 15345 I/Os completed (+2272) 00:33:18.430 QEMU NVMe Ctrl (12341 ): 15155 I/Os completed (+2273) 
00:33:18.430 00:33:19.806 QEMU NVMe Ctrl (12340 ): 17493 I/Os completed (+2148) 00:33:19.806 QEMU NVMe Ctrl (12341 ): 17318 I/Os completed (+2163) 00:33:19.806 00:33:20.763 QEMU NVMe Ctrl (12340 ): 19672 I/Os completed (+2179) 00:33:20.763 QEMU NVMe Ctrl (12341 ): 19499 I/Os completed (+2181) 00:33:20.763 00:33:21.702 QEMU NVMe Ctrl (12340 ): 21876 I/Os completed (+2204) 00:33:21.702 QEMU NVMe Ctrl (12341 ): 21704 I/Os completed (+2205) 00:33:21.702 00:33:22.695 QEMU NVMe Ctrl (12340 ): 24117 I/Os completed (+2241) 00:33:22.695 QEMU NVMe Ctrl (12341 ): 23913 I/Os completed (+2209) 00:33:22.695 00:33:23.633 QEMU NVMe Ctrl (12340 ): 26335 I/Os completed (+2218) 00:33:23.633 QEMU NVMe Ctrl (12341 ): 26397 I/Os completed (+2484) 00:33:23.633 00:33:23.633 17:30:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:23.633 17:30:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:23.633 17:30:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:23.633 17:30:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:23.633 [2024-11-26 17:30:00.989096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:23.633 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:23.633 [2024-11-26 17:30:00.991513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:00.991627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:00.991662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:00.991697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:23.633 [2024-11-26 17:30:00.995338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:00.995406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:00.995428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:00.995449] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:23.633 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:23.633 [2024-11-26 17:30:01.030112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:23.633 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:23.633 [2024-11-26 17:30:01.031657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:01.031715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:01.031745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:01.031765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:23.633 [2024-11-26 17:30:01.034637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:01.034680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:01.034702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 [2024-11-26 17:30:01.034724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:23.633 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:23.633 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:23.893 Attaching to 0000:00:10.0 00:33:23.893 Attached to 0000:00:10.0 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:23.893 17:30:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:23.893 Attaching to 0000:00:11.0 00:33:23.893 Attached to 0000:00:11.0 00:33:24.461 QEMU NVMe Ctrl (12340 ): 1370 I/Os completed (+1370) 00:33:24.461 QEMU NVMe Ctrl (12341 ): 1168 I/Os completed (+1168) 00:33:24.461 00:33:25.839 QEMU NVMe Ctrl (12340 ): 3322 I/Os completed (+1952) 00:33:25.839 QEMU NVMe Ctrl (12341 ): 3120 I/Os completed (+1952) 00:33:25.839 00:33:26.451 QEMU NVMe Ctrl (12340 ): 5438 I/Os completed (+2116) 00:33:26.451 QEMU NVMe Ctrl (12341 ): 5244 I/Os completed (+2124) 00:33:26.451 00:33:27.824 QEMU NVMe Ctrl (12340 ): 7758 I/Os completed (+2320) 00:33:27.824 QEMU NVMe Ctrl (12341 ): 7564 I/Os completed (+2320) 00:33:27.824 00:33:28.761 QEMU NVMe Ctrl (12340 ): 10098 I/Os completed (+2340) 00:33:28.761 QEMU NVMe Ctrl (12341 ): 9904 I/Os completed (+2340) 00:33:28.761 00:33:29.756 QEMU NVMe Ctrl (12340 ): 12434 I/Os completed (+2336) 00:33:29.756 QEMU NVMe Ctrl (12341 ): 12240 I/Os completed (+2336) 00:33:29.756 00:33:30.695 QEMU NVMe Ctrl (12340 ): 14682 I/Os completed (+2248) 00:33:30.695 QEMU NVMe Ctrl (12341 ): 14488 I/Os completed (+2248) 00:33:30.695 00:33:31.633 QEMU NVMe Ctrl (12340 ): 17018 I/Os completed (+2336) 00:33:31.633 QEMU NVMe Ctrl (12341 ): 16826 I/Os completed (+2338) 00:33:31.633 00:33:32.572 
QEMU NVMe Ctrl (12340 ): 19394 I/Os completed (+2376) 00:33:32.572 QEMU NVMe Ctrl (12341 ): 19202 I/Os completed (+2376) 00:33:32.572 00:33:33.524 QEMU NVMe Ctrl (12340 ): 21770 I/Os completed (+2376) 00:33:33.524 QEMU NVMe Ctrl (12341 ): 21574 I/Os completed (+2372) 00:33:33.524 00:33:34.462 QEMU NVMe Ctrl (12340 ): 24158 I/Os completed (+2388) 00:33:34.462 QEMU NVMe Ctrl (12341 ): 23962 I/Os completed (+2388) 00:33:34.462 00:33:35.400 QEMU NVMe Ctrl (12340 ): 26526 I/Os completed (+2368) 00:33:35.400 QEMU NVMe Ctrl (12341 ): 26331 I/Os completed (+2369) 00:33:35.400 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:35.970 [2024-11-26 17:30:13.307335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:35.970 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:35.970 [2024-11-26 17:30:13.309045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.309208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.309273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.309331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:35.970 [2024-11-26 17:30:13.312539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.312648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.312710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.312775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:33:35.970 EAL: Scan for (pci) bus failed. 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:35.970 [2024-11-26 17:30:13.343414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
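(A reading aid for the hot-plug echoes above and below: xtrace never prints redirections, so the bare 'echo 1' at sw_hotplug.sh line 40 and the uio_pci_generic/BDF/empty-string echoes at lines 59-62 all have hidden sysfs targets. The remove/rescan pair below is the standard mechanism; the rescan write is confirmed by the tgt_run_hotplug trap traced further down, while the per-device target on the first line is an assumption:

    bdf=0000:00:10.0

    # Hot-remove: detach the function from the PCI bus.
    # Target path assumed -- xtrace hides the redirection.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"

    # Re-attach: rescan the bus. This exact write appears in the traced trap.
    echo 1 > /sys/bus/pci/rescan

The driver_override-style echoes at lines 59-62 are left out of the sketch since their targets cannot be recovered from the trace.)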
00:33:35.970 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:35.970 [2024-11-26 17:30:13.348036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.348100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.348131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.348154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:35.970 [2024-11-26 17:30:13.350815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.350859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.350884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 [2024-11-26 17:30:13.350899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:35.970 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:36.230 Attaching to 0000:00:10.0 00:33:36.230 Attached to 0000:00:10.0 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:36.230 17:30:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:36.230 Attaching to 0000:00:11.0 00:33:36.230 Attached to 0000:00:11.0 00:33:36.230 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:36.230 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:36.230 [2024-11-26 17:30:13.630395] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:33:48.475 17:30:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:48.475 17:30:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:48.475 17:30:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.98 00:33:48.475 17:30:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.98 00:33:48.475 17:30:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:33:48.475 17:30:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.98 00:33:48.475 17:30:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.98 2 00:33:48.475 remove_attach_helper took 42.98s to complete (handling 2 nvme drive(s)) 17:30:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:33:55.072 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68649 00:33:55.072 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68649) - No such process 00:33:55.072 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68649 00:33:55.072 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:33:55.072 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:33:55.073 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:33:55.073 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69182 00:33:55.073 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:55.073 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:33:55.073 17:30:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69182 00:33:55.073 17:30:31 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69182 ']' 00:33:55.073 17:30:31 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.073 17:30:31 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.073 17:30:31 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.073 17:30:31 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.073 17:30:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:55.073 [2024-11-26 17:30:31.757173] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:33:55.073 [2024-11-26 17:30:31.757961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69182 ] 00:33:55.073 [2024-11-26 17:30:31.938987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.073 [2024-11-26 17:30:32.090370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:33:56.011 17:30:33 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:56.011 17:30:33 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:56.011 17:30:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:02.651 17:30:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.651 17:30:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:02.651 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:02.651 [2024-11-26 17:30:39.278409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:02.651 [2024-11-26 17:30:39.280865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.651 [2024-11-26 17:30:39.280922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.651 [2024-11-26 17:30:39.280944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.651 [2024-11-26 17:30:39.280971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.280986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.281015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 [2024-11-26 17:30:39.281026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.281039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.281049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 [2024-11-26 17:30:39.281067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.281078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.281091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 17:30:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.652 17:30:39 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:02.652 [2024-11-26 17:30:39.677658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:02.652 [2024-11-26 17:30:39.680308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.680353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.680388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 [2024-11-26 17:30:39.680416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.680432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.680444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 [2024-11-26 17:30:39.680461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.680474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.680487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 [2024-11-26 17:30:39.680497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.652 [2024-11-26 17:30:39.680510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.652 [2024-11-26 17:30:39.680521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:02.652 17:30:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.652 17:30:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:02.652 17:30:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:02.652 17:30:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:02.652 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:02.652 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:02.652 
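The tgt_run_hotplug phase above runs with use_bdev=true, so instead of sysfs it polls the running SPDK target: bdev_bdfs asks which PCI addresses still back a bdev, and the helper spins until the removed controller drops out of the list. A sketch of the traced loop, modulo exact statement order (rpc_cmd is the autotest wrapper around scripts/rpc.py; the /dev/fd/63 in the trace is just the process substitution feeding jq):

    bdev_bdfs() {
        # Ask the target for its bdevs and extract the NVMe PCI addresses.
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

That is exactly the '(( 2 > 0 ))' -> sleep 0.5 -> '(( 0 > 0 ))' countdown visible in the trace as each controller detaches.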
17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:02.652 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:02.652 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:02.911 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:02.911 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:02.911 17:30:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:15.178 17:30:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.178 17:30:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:15.178 17:30:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:15.178 [2024-11-26 17:30:52.253633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:15.178 [2024-11-26 17:30:52.256476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.178 [2024-11-26 17:30:52.256526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.178 [2024-11-26 17:30:52.256543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.178 [2024-11-26 17:30:52.256572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.178 [2024-11-26 17:30:52.256586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.178 [2024-11-26 17:30:52.256599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.178 [2024-11-26 17:30:52.256625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.178 [2024-11-26 17:30:52.256656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.178 [2024-11-26 17:30:52.256667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.178 [2024-11-26 17:30:52.256681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.178 [2024-11-26 17:30:52.256691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.178 [2024-11-26 17:30:52.256704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:15.178 17:30:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.178 17:30:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:15.178 17:30:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:15.178 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:15.437 [2024-11-26 17:30:52.652858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:15.437 [2024-11-26 17:30:52.655514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.437 [2024-11-26 17:30:52.655595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.437 [2024-11-26 17:30:52.655632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.437 [2024-11-26 17:30:52.655659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.437 [2024-11-26 17:30:52.655674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.437 [2024-11-26 17:30:52.655684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.437 [2024-11-26 17:30:52.655699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.437 [2024-11-26 17:30:52.655710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.437 [2024-11-26 17:30:52.655723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.437 [2024-11-26 17:30:52.655735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.437 [2024-11-26 17:30:52.655748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.437 [2024-11-26 17:30:52.655758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:15.437 17:30:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.437 17:30:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:15.437 17:30:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:15.437 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:15.695 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:15.695 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:15.695 17:30:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:15.695 17:30:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:27.892 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:27.892 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:27.892 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:27.892 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:27.892 17:31:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.892 17:31:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:27.893 17:31:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:27.893 [2024-11-26 17:31:05.228850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:27.893 [2024-11-26 17:31:05.231827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:27.893 [2024-11-26 17:31:05.231879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.893 [2024-11-26 17:31:05.231897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.893 [2024-11-26 17:31:05.231928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:27.893 [2024-11-26 17:31:05.231945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.893 [2024-11-26 17:31:05.231963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.893 [2024-11-26 17:31:05.231975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:27.893 [2024-11-26 17:31:05.231989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.893 [2024-11-26 17:31:05.231999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.893 [2024-11-26 17:31:05.232012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:27.893 [2024-11-26 17:31:05.232022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.893 [2024-11-26 17:31:05.232035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:27.893 17:31:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.893 17:31:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:27.893 17:31:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:27.893 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:28.460 [2024-11-26 17:31:05.628104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:28.460 [2024-11-26 17:31:05.630890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.460 [2024-11-26 17:31:05.630939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.460 [2024-11-26 17:31:05.630959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.460 [2024-11-26 17:31:05.630985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.460 [2024-11-26 17:31:05.631001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.460 [2024-11-26 17:31:05.631012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.460 [2024-11-26 17:31:05.631025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.460 [2024-11-26 17:31:05.631035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.460 [2024-11-26 17:31:05.631052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.460 [2024-11-26 17:31:05.631064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.460 [2024-11-26 17:31:05.631094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.460 [2024-11-26 17:31:05.631104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:28.460 17:31:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.460 17:31:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:28.460 17:31:05 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:28.460 17:31:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:28.460 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:28.719 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:28.719 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:28.719 17:31:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:28.719 17:31:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.94 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.94 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.94 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.94 2 00:34:40.910 remove_attach_helper took 44.94s to complete (handling 2 nvme drive(s)) 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 17:31:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.911 17:31:18 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:34:40.911 17:31:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:34:40.911 17:31:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:47.478 17:31:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.478 17:31:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:47.478 [2024-11-26 17:31:24.259838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
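
At this point the script has switched to its timed debug pass: sw_hotplug.sh@122 calls debug_remove_attach_helper 3 6 true, which runs remove_attach_helper under timing_cmd (@19-21 above). The traced locals and control flow give the helper this shape; untraced details (sysfs targets, the exact rescan step) are assumptions, and the 12 s sleep being 2 x hotplug_wait is an inference from the two traced values:

    # Skeleton of remove_attach_helper as reconstructed from the trace.
    remove_attach_helper() {
        local hotplug_events=$1   # @27: 3 remove/attach cycles
        local hotplug_wait=$2     # @28: 6 s settle budget
        local use_bdev=$3         # @29: true -> verify through bdev_get_bdevs
        local dev bdfs            # @30

        sleep "$hotplug_wait"     # @36: let the target settle first

        while ((hotplug_events--)); do                        # @38
            for dev in "${nvmes[@]}"; do                      # @39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40, target assumed
            done
            # @43-51: with use_bdev=true, poll bdev_bdfs until the bdevs vanish
            # @56-62: rescan and rebind each controller (see the sketch further down)
            sleep $((2 * hotplug_wait))                       # @66: the traced 'sleep 12'
            # @68-71: re-read the BDFs and assert they match the expected pair
        done
    }
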
00:34:47.478 [2024-11-26 17:31:24.261756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.261814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.261834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 [2024-11-26 17:31:24.261868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.261882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.261898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 [2024-11-26 17:31:24.261913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.261929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.261941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 [2024-11-26 17:31:24.261957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.261970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.262000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 17:31:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:47.478 [2024-11-26 17:31:24.659071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
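
The 44.94 figure bookkept after the first pass comes out of autotest_common.sh's timing_cmd (@709-722 in the trace): the helper runs under bash's `time` keyword with TIMEFORMAT=%2R, so only the elapsed wall-clock seconds are emitted, and sw_hotplug.sh@21 captures them as helper_time. A simplified sketch; the real helper keeps the timed command's own output flowing to the log through the traced `exec` fd juggling, which is elided here:

    # Minimal timing_cmd: print the wall-clock seconds a command took.
    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R   # @713: report only real time, 2 decimals
        # `time` writes its report to the subshell's stderr; capture just that
        # (the command's own output is discarded in this simplification).
        time=$( (time "$@" > /dev/null 2>&1) 2>&1 ) || cmd_es=$?
        echo "$time"                  # becomes helper_time=$(timing_cmd ...) at @21
        return "$cmd_es"              # @722
    }
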
00:34:47.478 [2024-11-26 17:31:24.661901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.661978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.662000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 [2024-11-26 17:31:24.662026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.662043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.662058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 [2024-11-26 17:31:24.662076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.662090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.662106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 [2024-11-26 17:31:24.662120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:47.478 [2024-11-26 17:31:24.662136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.478 [2024-11-26 17:31:24.662148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:47.478 17:31:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.478 17:31:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:47.478 17:31:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:47.478 17:31:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:47.738 17:31:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:00.035 17:31:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.035 17:31:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:00.035 17:31:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:00.035 17:31:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.035 17:31:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:00.035 [2024-11-26 17:31:37.235673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
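
The reattach half of each cycle is traced at sw_hotplug.sh@56-62: a lone `echo 1` (@56), then per device an `echo uio_pci_generic`, the BDF echoed twice, and an empty string (@59-62). That four-write pattern matches the standard sysfs driver_override rebind dance; the redirection targets below are assumptions, since xtrace hides them:

    # Presumed @56: rescan the bus so the removed functions reappear.
    echo 1 > /sys/bus/pci/rescan

    # Presumed @58-62: pin each controller to uio_pci_generic and reprobe it.
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" || true      # @60
        echo "$dev" > /sys/bus/pci/drivers_probe                             # @61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62
    done
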
00:35:00.035 [2024-11-26 17:31:37.237453] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.035 [2024-11-26 17:31:37.237515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.035 [2024-11-26 17:31:37.237531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.035 [2024-11-26 17:31:37.237561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.035 [2024-11-26 17:31:37.237571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.035 [2024-11-26 17:31:37.237583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.035 [2024-11-26 17:31:37.237593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.035 [2024-11-26 17:31:37.237604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.035 [2024-11-26 17:31:37.237626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.035 [2024-11-26 17:31:37.237640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.035 [2024-11-26 17:31:37.237648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.035 [2024-11-26 17:31:37.237660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.035 17:31:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:35:00.035 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:00.293 [2024-11-26 17:31:37.634945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
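
Every bdev_bdfs poll in this log is bracketed by xtrace_disable and `set +x` (autotest_common.sh@563 and @10) so the RPC JSON and jq plumbing do not flood the trace, and the `[[ 0 == 0 ]]` at @591 is the restore path checking its saved nesting state. A hypothetical condensed version of the pattern, assuming $rootdir points at the SPDK checkout; the real rpc_cmd can also talk to a persistent rpc.py daemon over a FIFO, and the real xtrace helpers track nesting depth:

    # Minimal stand-ins for the traced helpers.
    xtrace_disable() { PREV_X=$-; set +x; }                          # @10: stop tracing
    xtrace_restore() { if [[ $PREV_X == *x* ]]; then set -x; fi; }   # re-enable if it was on

    rpc_cmd() {
        xtrace_disable
        local es=0
        "$rootdir/scripts/rpc.py" "$@" || es=$?
        xtrace_restore
        return "$es"
    }
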
00:35:00.293 [2024-11-26 17:31:37.637003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.293 [2024-11-26 17:31:37.637048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.293 [2024-11-26 17:31:37.637067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.293 [2024-11-26 17:31:37.637094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.293 [2024-11-26 17:31:37.637111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.293 [2024-11-26 17:31:37.637120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.293 [2024-11-26 17:31:37.637133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.293 [2024-11-26 17:31:37.637142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.293 [2024-11-26 17:31:37.637154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.293 [2024-11-26 17:31:37.637163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:00.293 [2024-11-26 17:31:37.637174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.293 [2024-11-26 17:31:37.637183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:00.551 17:31:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.551 17:31:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:00.551 17:31:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:00.551 17:31:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:00.812 17:31:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:13.060 17:31:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.060 17:31:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:13.060 17:31:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:13.060 [2024-11-26 17:31:50.210892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
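
The assertion that closes every cycle is sw_hotplug.sh@70-71, traced just above: re-read the BDFs and demand an exact match. The long backslash-escaped right-hand side in the trace is only xtrace's way of printing a quoted `[[ == ]]` operand, so this is a literal string comparison, not a glob. Reconstructed, with the right-hand side presumably built from the nvmes array:

    # @70-71: after reattach, both controllers must be back as bdevs.
    bdfs=($(bdev_bdfs))                  # @70
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # @71, here: "0000:00:10.0 0000:00:11.0"
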
00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:13.060 [2024-11-26 17:31:50.212842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.060 [2024-11-26 17:31:50.212901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.060 [2024-11-26 17:31:50.212920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.060 [2024-11-26 17:31:50.212966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.060 [2024-11-26 17:31:50.212979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.060 [2024-11-26 17:31:50.212993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.060 [2024-11-26 17:31:50.213005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.060 [2024-11-26 17:31:50.213024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.060 [2024-11-26 17:31:50.213035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.060 [2024-11-26 17:31:50.213049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.060 [2024-11-26 17:31:50.213060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.060 [2024-11-26 17:31:50.213073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:13.060 17:31:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.060 17:31:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:13.060 17:31:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:35:13.060 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:13.320 [2024-11-26 17:31:50.610145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
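
Worth noting for this second timed pass: right before it started, sw_hotplug.sh@119-120 (traced above, just after the first 44.94 s result) reset the target's built-in NVMe hotplug monitor over RPC, presumably so these cycles also exercise SPDK's own hotplug detection path rather than only the bdev enumeration. The two traced calls:

    # @119-120: toggle the NVMe hotplug monitor inside the running target.
    rpc_cmd bdev_nvme_set_hotplug -d    # disable monitoring
    rpc_cmd bdev_nvme_set_hotplug -e    # re-enable it
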
00:35:13.320 [2024-11-26 17:31:50.612766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.320 [2024-11-26 17:31:50.612830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.320 [2024-11-26 17:31:50.612853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.320 [2024-11-26 17:31:50.612881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.320 [2024-11-26 17:31:50.612899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.320 [2024-11-26 17:31:50.612911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.320 [2024-11-26 17:31:50.612927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.320 [2024-11-26 17:31:50.612938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.320 [2024-11-26 17:31:50.612952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.320 [2024-11-26 17:31:50.612975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:13.320 [2024-11-26 17:31:50.612993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.320 [2024-11-26 17:31:50.613004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.320 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:35:13.320 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:13.320 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:13.320 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:13.320 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:13.320 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:13.320 17:31:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.320 17:31:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:13.579 17:31:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.579 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:13.579 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:13.579 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:13.579 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:13.579 17:31:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:13.579 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:13.579 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:13.579 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:13.579 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:13.579 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
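
Once the cycle below finishes and the second timing (44.99 s) is printed, the script clears its trap and shuts the target down through autotest_common.sh's killprocess, whose steps are all traced at @954-978 further down. Reconstructed from that trace, with the untraced sudo branch marked as an assumption:

    # killprocess <pid>: stop a test daemon (here pid 69182, comm reactor_0).
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                             # @954: reject an empty pid
        kill -0 "$pid" 2> /dev/null || return                 # @958: already gone?
        if [[ $(uname) == Linux ]]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: -> reactor_0
        fi
        if [[ $process_name == sudo ]]; then                  # @964
            sudo kill "$pid"    # assumption: this branch is not traced in the log
        else
            echo "killing process with pid $pid"              # @972
            kill "$pid"                                       # @973
        fi
        wait "$pid" || true                                   # @978: reap; ignore status
    }
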
00:35:13.838 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:13.838 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:13.838 17:31:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.99 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.99 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.99 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.99 2 00:35:26.065 remove_attach_helper took 44.99s to complete (handling 2 nvme drive(s)) 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:35:26.065 17:32:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69182 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69182 ']' 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69182 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69182 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69182' 00:35:26.065 killing process with pid 69182 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69182 00:35:26.065 17:32:03 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69182 00:35:29.376 17:32:06 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:29.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:29.948 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:29.948 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:29.948 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:29.948 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:29.948 ************************************ 00:35:29.948 END TEST sw_hotplug 00:35:29.948 ************************************ 00:35:29.948 
00:35:29.948 real 2m33.537s 00:35:29.948 user 1m53.913s 00:35:29.948 sys 0m19.458s 00:35:29.948 17:32:07 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.948 17:32:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:29.948 17:32:07 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:35:29.948 17:32:07 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:29.948 17:32:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:30.209 17:32:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.209 17:32:07 -- common/autotest_common.sh@10 -- # set +x 00:35:30.209 ************************************ 00:35:30.209 START TEST nvme_xnvme 00:35:30.209 ************************************ 00:35:30.209 17:32:07 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:30.209 * Looking for test storage... 00:35:30.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:30.209 17:32:07 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:30.209 17:32:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:35:30.209 17:32:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:30.209 17:32:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.209 17:32:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.210 17:32:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:30.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.210 --rc genhtml_branch_coverage=1 00:35:30.210 --rc genhtml_function_coverage=1 00:35:30.210 --rc genhtml_legend=1 00:35:30.210 --rc geninfo_all_blocks=1 00:35:30.210 --rc geninfo_unexecuted_blocks=1 00:35:30.210 00:35:30.210 ' 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:30.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.210 --rc genhtml_branch_coverage=1 00:35:30.210 --rc genhtml_function_coverage=1 00:35:30.210 --rc genhtml_legend=1 00:35:30.210 --rc geninfo_all_blocks=1 00:35:30.210 --rc geninfo_unexecuted_blocks=1 00:35:30.210 00:35:30.210 ' 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:30.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.210 --rc genhtml_branch_coverage=1 00:35:30.210 --rc genhtml_function_coverage=1 00:35:30.210 --rc genhtml_legend=1 00:35:30.210 --rc geninfo_all_blocks=1 00:35:30.210 --rc geninfo_unexecuted_blocks=1 00:35:30.210 00:35:30.210 ' 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:30.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.210 --rc genhtml_branch_coverage=1 00:35:30.210 --rc genhtml_function_coverage=1 00:35:30.210 --rc genhtml_legend=1 00:35:30.210 --rc geninfo_all_blocks=1 00:35:30.210 --rc geninfo_unexecuted_blocks=1 00:35:30.210 00:35:30.210 ' 00:35:30.210 17:32:07 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:35:30.210 17:32:07 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:35:30.210 17:32:07 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:35:30.210 17:32:07 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:35:30.210 17:32:07 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:35:30.210 17:32:07 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:35:30.211 17:32:07 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:35:30.211 17:32:07 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:30.211 17:32:07 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:35:30.472 17:32:07 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:35:30.473 17:32:07 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:35:30.473 17:32:07 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:35:30.473 #define SPDK_CONFIG_H 00:35:30.473 #define SPDK_CONFIG_AIO_FSDEV 1 00:35:30.473 #define SPDK_CONFIG_APPS 1 00:35:30.473 #define SPDK_CONFIG_ARCH native 00:35:30.473 #define SPDK_CONFIG_ASAN 1 00:35:30.473 #undef SPDK_CONFIG_AVAHI 00:35:30.473 #undef SPDK_CONFIG_CET 00:35:30.473 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:35:30.473 #define SPDK_CONFIG_COVERAGE 1 00:35:30.473 #define SPDK_CONFIG_CROSS_PREFIX 00:35:30.473 #undef SPDK_CONFIG_CRYPTO 00:35:30.473 #undef SPDK_CONFIG_CRYPTO_MLX5 00:35:30.473 #undef SPDK_CONFIG_CUSTOMOCF 00:35:30.473 #undef SPDK_CONFIG_DAOS 00:35:30.473 #define SPDK_CONFIG_DAOS_DIR 00:35:30.473 #define SPDK_CONFIG_DEBUG 1 00:35:30.473 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:35:30.473 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:30.473 #define SPDK_CONFIG_DPDK_INC_DIR 00:35:30.473 #define SPDK_CONFIG_DPDK_LIB_DIR 00:35:30.473 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:35:30.473 #undef SPDK_CONFIG_DPDK_UADK 00:35:30.473 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:30.473 #define SPDK_CONFIG_EXAMPLES 1 00:35:30.473 #undef SPDK_CONFIG_FC 00:35:30.473 #define SPDK_CONFIG_FC_PATH 00:35:30.473 #define SPDK_CONFIG_FIO_PLUGIN 1 00:35:30.473 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:35:30.473 #define SPDK_CONFIG_FSDEV 1 00:35:30.473 #undef SPDK_CONFIG_FUSE 00:35:30.473 #undef SPDK_CONFIG_FUZZER 00:35:30.473 #define SPDK_CONFIG_FUZZER_LIB 00:35:30.473 #undef SPDK_CONFIG_GOLANG 00:35:30.473 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:35:30.473 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:35:30.473 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:35:30.473 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:35:30.473 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:35:30.473 #undef SPDK_CONFIG_HAVE_LIBBSD 00:35:30.473 #undef SPDK_CONFIG_HAVE_LZ4 00:35:30.473 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:35:30.473 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:35:30.473 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:35:30.473 #define SPDK_CONFIG_IDXD 1 00:35:30.473 #define SPDK_CONFIG_IDXD_KERNEL 1 00:35:30.473 #undef SPDK_CONFIG_IPSEC_MB 00:35:30.473 #define SPDK_CONFIG_IPSEC_MB_DIR 00:35:30.473 #define SPDK_CONFIG_ISAL 1 00:35:30.473 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:35:30.473 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:35:30.473 #define SPDK_CONFIG_LIBDIR 00:35:30.473 #undef SPDK_CONFIG_LTO 00:35:30.473 #define SPDK_CONFIG_MAX_LCORES 128 00:35:30.473 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:35:30.473 #define SPDK_CONFIG_NVME_CUSE 1 00:35:30.473 #undef SPDK_CONFIG_OCF 00:35:30.473 #define SPDK_CONFIG_OCF_PATH 00:35:30.473 #define SPDK_CONFIG_OPENSSL_PATH 00:35:30.473 #undef SPDK_CONFIG_PGO_CAPTURE 00:35:30.473 #define SPDK_CONFIG_PGO_DIR 00:35:30.473 #undef SPDK_CONFIG_PGO_USE 00:35:30.473 #define SPDK_CONFIG_PREFIX /usr/local 00:35:30.473 #undef SPDK_CONFIG_RAID5F 00:35:30.473 #undef SPDK_CONFIG_RBD 00:35:30.473 #define SPDK_CONFIG_RDMA 1 00:35:30.473 #define SPDK_CONFIG_RDMA_PROV verbs 00:35:30.473 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:35:30.473 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:35:30.473 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:35:30.473 #define SPDK_CONFIG_SHARED 1 00:35:30.473 #undef SPDK_CONFIG_SMA 00:35:30.473 #define SPDK_CONFIG_TESTS 1 00:35:30.473 #undef SPDK_CONFIG_TSAN 00:35:30.473 #define SPDK_CONFIG_UBLK 1 00:35:30.473 #define SPDK_CONFIG_UBSAN 1 00:35:30.473 #undef SPDK_CONFIG_UNIT_TESTS 00:35:30.473 #undef SPDK_CONFIG_URING 00:35:30.473 #define SPDK_CONFIG_URING_PATH 00:35:30.473 #undef SPDK_CONFIG_URING_ZNS 00:35:30.473 #undef SPDK_CONFIG_USDT 00:35:30.473 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:35:30.473 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:35:30.473 #undef SPDK_CONFIG_VFIO_USER 00:35:30.473 #define SPDK_CONFIG_VFIO_USER_DIR 00:35:30.473 #define SPDK_CONFIG_VHOST 1 00:35:30.473 #define SPDK_CONFIG_VIRTIO 1 00:35:30.473 #undef SPDK_CONFIG_VTUNE 00:35:30.473 #define SPDK_CONFIG_VTUNE_DIR 00:35:30.473 #define SPDK_CONFIG_WERROR 1 00:35:30.473 #define SPDK_CONFIG_WPDK_DIR 00:35:30.473 #define SPDK_CONFIG_XNVME 1 00:35:30.473 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:35:30.473 17:32:07 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:35:30.473 17:32:07 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:30.473 17:32:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.473 17:32:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.473 17:32:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.473 17:32:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.473 17:32:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.473 17:32:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.473 17:32:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.473 17:32:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:30.473 17:32:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@68 -- # uname -s 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:35:30.473 
17:32:07 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:35:30.473 17:32:07 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:35:30.473 17:32:07 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:35:30.474 17:32:07 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:30.474 17:32:07 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:35:30.474 17:32:07 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
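The stretch of trace above is autotest_common.sh composing the sanitizer environment that every subsequent test inherits. A minimal sketch of the same setup run by hand, using only values visible in the trace (the suppression entry mutes a known libfuse3 leak under LeakSanitizer):

# abort on the first ASan/UBSan report; coredumps stay enabled (disable_coredump=0)
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
# LeakSanitizer reads leak suppressions from the file named in LSAN_OPTIONS
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file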
00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70535 ]] 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70535 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.l8np5S 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.l8np5S/tests/xnvme /tmp/spdk.l8np5S 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:35:30.475 17:32:07 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975437312 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592350720 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975437312 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592350720 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94073761792 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5629018112 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:35:30.475 * Looking for test storage... 
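The df/read loop traced above is set_test_storage tabulating every mount's filesystem type and free space; the lines that follow walk the candidate directories and keep the first whose backing mount can hold the requested 2214592512 bytes. A condensed sketch of that selection logic — the --block-size=1 flag is an assumption made so df's numbers come out in bytes, matching the values in the trace:

requested_size=2214592512
declare -A fss avails
# one row per mount: filesystem, type, size, used, available, use%, mount point
while read -r source fs size use avail _ mount; do
    fss["$mount"]=$fs
    avails["$mount"]=$avail
done < <(df -T --block-size=1 | grep -v Filesystem)
for target_dir in "${storage_candidates[@]}"; do
    # resolve the mount point backing this candidate directory
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done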
00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975437312 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:35:30.475 17:32:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:30.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.476 --rc genhtml_branch_coverage=1 00:35:30.476 --rc genhtml_function_coverage=1 00:35:30.476 --rc genhtml_legend=1 00:35:30.476 --rc geninfo_all_blocks=1 00:35:30.476 --rc geninfo_unexecuted_blocks=1 00:35:30.476 00:35:30.476 ' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.476 --rc genhtml_branch_coverage=1 00:35:30.476 --rc genhtml_function_coverage=1 00:35:30.476 --rc genhtml_legend=1 00:35:30.476 --rc geninfo_all_blocks=1 
00:35:30.476 --rc geninfo_unexecuted_blocks=1 00:35:30.476 00:35:30.476 ' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.476 --rc genhtml_branch_coverage=1 00:35:30.476 --rc genhtml_function_coverage=1 00:35:30.476 --rc genhtml_legend=1 00:35:30.476 --rc geninfo_all_blocks=1 00:35:30.476 --rc geninfo_unexecuted_blocks=1 00:35:30.476 00:35:30.476 ' 00:35:30.476 17:32:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.476 --rc genhtml_branch_coverage=1 00:35:30.476 --rc genhtml_function_coverage=1 00:35:30.476 --rc genhtml_legend=1 00:35:30.476 --rc geninfo_all_blocks=1 00:35:30.476 --rc geninfo_unexecuted_blocks=1 00:35:30.476 00:35:30.476 ' 00:35:30.476 17:32:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:30.476 17:32:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.735 17:32:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.735 17:32:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.735 17:32:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.735 17:32:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.735 17:32:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.735 17:32:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.735 17:32:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:30.735 17:32:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.735 17:32:07 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:35:30.735 17:32:07 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:35:30.735 17:32:07 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:35:30.735 17:32:07 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:35:30.735 17:32:07 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:35:30.735 17:32:07 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:35:30.735 17:32:07 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:35:30.736 17:32:07 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:31.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:31.304 Waiting for block devices as requested 00:35:31.563 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:31.563 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:31.563 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:31.822 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:37.114 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:37.114 17:32:14 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:35:37.114 17:32:14 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:35:37.114 17:32:14 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:35:37.374 17:32:14 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:35:37.374 17:32:14 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:35:37.374 No valid GPT data, bailing 00:35:37.374 17:32:14 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:37.374 17:32:14 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:35:37.374 17:32:14 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:35:37.374 17:32:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:35:37.374 17:32:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:37.374 17:32:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.374 17:32:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:37.374 ************************************ 00:35:37.374 START TEST xnvme_rpc 00:35:37.374 ************************************ 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70932 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70932 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70932 ']' 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:37.374 17:32:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:37.634 [2024-11-26 17:32:14.835542] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
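The xnvme_rpc test beginning here exercises the bdev purely over JSON-RPC: create an xnvme bdev, read each creation parameter back out of framework_get_config, then delete it. Run by hand against the spdk_tgt started in the trace, the same sequence might look like this sketch (paths relative to the SPDK repo; the positional argument order simply mirrors the test's rpc_cmd calls above):

./build/bin/spdk_tgt &                      # serves RPCs on /var/tmp/spdk.sock
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
# query one creation parameter back, as the test's rpc_xnvme helper does
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev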
00:35:37.634 [2024-11-26 17:32:14.835698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:35:37.634 [2024-11-26 17:32:15.014742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.893 [2024-11-26 17:32:15.135133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:38.830 xnvme_bdev 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.830 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70932 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70932 ']' 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70932 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70932 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:39.088 killing process with pid 70932 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70932' 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70932 00:35:39.088 17:32:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70932 00:35:42.377 00:35:42.377 real 0m4.553s 00:35:42.377 user 0m4.477s 00:35:42.377 sys 0m0.667s 00:35:42.377 17:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.377 ************************************ 00:35:42.377 END TEST xnvme_rpc 00:35:42.377 17:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:42.377 ************************************ 00:35:42.377 17:32:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:35:42.377 17:32:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.377 17:32:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.377 17:32:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:42.377 ************************************ 00:35:42.377 START TEST xnvme_bdevperf 00:35:42.377 ************************************ 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:42.377 17:32:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:42.377 { 00:35:42.377 "subsystems": [ 00:35:42.377 { 00:35:42.377 "subsystem": "bdev", 00:35:42.377 "config": [ 00:35:42.377 { 00:35:42.377 "params": { 00:35:42.377 "io_mechanism": "libaio", 00:35:42.377 "conserve_cpu": false, 00:35:42.377 "filename": "/dev/nvme0n1", 00:35:42.377 "name": "xnvme_bdev" 00:35:42.377 }, 00:35:42.377 "method": "bdev_xnvme_create" 00:35:42.377 }, 00:35:42.377 { 00:35:42.377 "method": "bdev_wait_for_examine" 00:35:42.377 } 00:35:42.377 ] 00:35:42.377 } 00:35:42.377 ] 00:35:42.377 } 00:35:42.377 [2024-11-26 17:32:19.443352] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:35:42.377 [2024-11-26 17:32:19.443482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71017 ] 00:35:42.377 [2024-11-26 17:32:19.645357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.377 [2024-11-26 17:32:19.781157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.941 Running I/O for 5 seconds... 00:35:44.804 42764.00 IOPS, 167.05 MiB/s [2024-11-26T17:32:23.625Z] 40943.00 IOPS, 159.93 MiB/s [2024-11-26T17:32:24.562Z] 41896.67 IOPS, 163.66 MiB/s [2024-11-26T17:32:25.500Z] 38169.75 IOPS, 149.10 MiB/s 00:35:48.054 Latency(us) 00:35:48.054 [2024-11-26T17:32:25.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.054 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:35:48.054 xnvme_bdev : 5.00 38839.63 151.72 0.00 0.00 1643.98 177.97 10817.73 00:35:48.054 [2024-11-26T17:32:25.500Z] =================================================================================================================== 00:35:48.054 [2024-11-26T17:32:25.500Z] Total : 38839.63 151.72 0.00 0.00 1643.98 177.97 10817.73 00:35:49.434 17:32:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:49.434 17:32:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:35:49.434 17:32:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:35:49.434 17:32:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:49.434 17:32:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.434 { 00:35:49.434 "subsystems": [ 00:35:49.434 { 00:35:49.434 "subsystem": "bdev", 00:35:49.434 "config": [ 00:35:49.434 { 00:35:49.434 "params": { 00:35:49.434 "io_mechanism": "libaio", 00:35:49.434 "conserve_cpu": false, 00:35:49.434 "filename": "/dev/nvme0n1", 00:35:49.434 "name": "xnvme_bdev" 00:35:49.434 }, 00:35:49.434 "method": "bdev_xnvme_create" 00:35:49.434 }, 00:35:49.434 { 00:35:49.434 "method": "bdev_wait_for_examine" 00:35:49.434 } 00:35:49.434 ] 00:35:49.434 } 00:35:49.434 ] 00:35:49.434 } 00:35:49.434 [2024-11-26 17:32:26.633714] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
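Note: the bdevperf runs in this section take their bdev layout as JSON on an inherited file descriptor (--json /dev/fd/62); the gen_conf block printed above is exactly that payload. A sketch of an equivalent standalone invocation, substituting bash process substitution for the harness's fd plumbing (flags and paths as in this workspace):

./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
  --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
   "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
)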
00:35:49.434 [2024-11-26 17:32:26.633845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71098 ] 00:35:49.434 [2024-11-26 17:32:26.814941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.694 [2024-11-26 17:32:26.968558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.262 Running I/O for 5 seconds... 00:35:52.138 36907.00 IOPS, 144.17 MiB/s [2024-11-26T17:32:30.549Z] 35765.00 IOPS, 139.71 MiB/s [2024-11-26T17:32:31.487Z] 35963.67 IOPS, 140.48 MiB/s [2024-11-26T17:32:32.424Z] 36070.75 IOPS, 140.90 MiB/s 00:35:54.978 Latency(us) 00:35:54.978 [2024-11-26T17:32:32.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.978 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:35:54.978 xnvme_bdev : 5.00 35956.63 140.46 0.00 0.00 1775.76 195.86 7955.90 00:35:54.978 [2024-11-26T17:32:32.424Z] =================================================================================================================== 00:35:54.978 [2024-11-26T17:32:32.424Z] Total : 35956.63 140.46 0.00 0.00 1775.76 195.86 7955.90 00:35:56.358 ************************************ 00:35:56.358 END TEST xnvme_bdevperf 00:35:56.358 ************************************ 00:35:56.358 00:35:56.358 real 0m14.376s 00:35:56.358 user 0m5.783s 00:35:56.358 sys 0m6.076s 00:35:56.358 17:32:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.358 17:32:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.358 17:32:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:35:56.358 17:32:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.358 17:32:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.358 17:32:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:56.358 ************************************ 00:35:56.358 START TEST xnvme_fio_plugin 00:35:56.358 ************************************ 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:35:56.358 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:56.662 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:56.662 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:56.662 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:35:56.662 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:56.662 17:32:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:35:56.662 { 00:35:56.662 "subsystems": [ 00:35:56.662 { 00:35:56.662 "subsystem": "bdev", 00:35:56.662 "config": [ 00:35:56.662 { 00:35:56.662 "params": { 00:35:56.662 "io_mechanism": "libaio", 00:35:56.662 "conserve_cpu": false, 00:35:56.662 "filename": "/dev/nvme0n1", 00:35:56.662 "name": "xnvme_bdev" 00:35:56.662 }, 00:35:56.662 "method": "bdev_xnvme_create" 00:35:56.662 }, 00:35:56.662 { 00:35:56.662 "method": "bdev_wait_for_examine" 00:35:56.662 } 00:35:56.662 ] 00:35:56.662 } 00:35:56.662 ] 00:35:56.662 } 00:35:56.662 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:35:56.662 fio-3.35 00:35:56.662 Starting 1 thread 00:36:03.240 00:36:03.240 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71227: Tue Nov 26 17:32:39 2024 00:36:03.240 read: IOPS=42.3k, BW=165MiB/s (173MB/s)(827MiB/5001msec) 00:36:03.240 slat (usec): min=4, max=768, avg=20.00, stdev=25.77 00:36:03.240 clat (usec): min=101, max=9985, avg=890.29, stdev=600.05 00:36:03.240 lat (usec): min=142, max=9999, avg=910.29, stdev=605.96 00:36:03.240 clat percentiles (usec): 00:36:03.240 | 1.00th=[ 180], 5.00th=[ 258], 10.00th=[ 330], 20.00th=[ 457], 00:36:03.240 | 30.00th=[ 570], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 881], 00:36:03.240 | 70.00th=[ 1012], 80.00th=[ 1172], 90.00th=[ 1450], 95.00th=[ 1975], 00:36:03.240 | 99.00th=[ 3392], 99.50th=[ 3916], 99.90th=[ 4686], 99.95th=[ 4948], 00:36:03.240 | 99.99th=[ 5538] 00:36:03.240 bw ( KiB/s): min=148768, max=228376, per=100.00%, avg=170183.33, stdev=25156.88, 
samples=9 00:36:03.240 iops : min=37192, max=57094, avg=42545.78, stdev=6289.17, samples=9 00:36:03.240 lat (usec) : 250=4.51%, 500=19.27%, 750=24.14%, 1000=21.25% 00:36:03.240 lat (msec) : 2=25.92%, 4=4.47%, 10=0.43% 00:36:03.240 cpu : usr=29.12%, sys=52.12%, ctx=63, majf=0, minf=764 00:36:03.240 IO depths : 1=0.2%, 2=1.3%, 4=4.3%, 8=11.0%, 16=25.3%, 32=56.1%, >=64=1.8% 00:36:03.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.240 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:36:03.240 issued rwts: total=211785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:03.240 00:36:03.240 Run status group 0 (all jobs): 00:36:03.240 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=827MiB (867MB), run=5001-5001msec 00:36:04.180 ----------------------------------------------------- 00:36:04.180 Suppressions used: 00:36:04.180 count bytes template 00:36:04.180 1 11 /usr/src/fio/parse.c 00:36:04.180 1 8 libtcmalloc_minimal.so 00:36:04.180 1 904 libcrypto.so 00:36:04.180 ----------------------------------------------------- 00:36:04.180 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:04.180 17:32:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:04.180 { 00:36:04.180 "subsystems": [ 00:36:04.180 { 00:36:04.180 "subsystem": "bdev", 00:36:04.180 "config": [ 00:36:04.180 { 00:36:04.180 "params": { 00:36:04.180 "io_mechanism": "libaio", 00:36:04.180 "conserve_cpu": false, 00:36:04.180 "filename": "/dev/nvme0n1", 00:36:04.180 "name": "xnvme_bdev" 00:36:04.180 }, 00:36:04.180 "method": "bdev_xnvme_create" 00:36:04.180 }, 00:36:04.180 { 00:36:04.180 "method": "bdev_wait_for_examine" 00:36:04.180 } 00:36:04.180 ] 00:36:04.180 } 00:36:04.180 ] 00:36:04.180 } 00:36:04.441 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:04.441 fio-3.35 00:36:04.441 Starting 1 thread 00:36:11.006 00:36:11.006 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71326: Tue Nov 26 17:32:47 2024 00:36:11.006 write: IOPS=36.6k, BW=143MiB/s (150MB/s)(716MiB/5001msec); 0 zone resets 00:36:11.006 slat (usec): min=4, max=884, avg=23.06, stdev=29.31 00:36:11.006 clat (usec): min=106, max=6240, avg=1027.09, stdev=736.68 00:36:11.006 lat (usec): min=150, max=6333, avg=1050.15, stdev=745.16 00:36:11.006 clat percentiles (usec): 00:36:11.006 | 1.00th=[ 196], 5.00th=[ 285], 10.00th=[ 363], 20.00th=[ 498], 00:36:11.006 | 30.00th=[ 619], 40.00th=[ 734], 50.00th=[ 848], 60.00th=[ 963], 00:36:11.006 | 70.00th=[ 1123], 80.00th=[ 1336], 90.00th=[ 1926], 95.00th=[ 2606], 00:36:11.006 | 99.00th=[ 3916], 99.50th=[ 4293], 99.90th=[ 4883], 99.95th=[ 5080], 00:36:11.006 | 99.99th=[ 5473] 00:36:11.006 bw ( KiB/s): min=109944, max=170144, per=100.00%, avg=146998.22, stdev=20011.13, samples=9 00:36:11.006 iops : min=27486, max=42536, avg=36749.56, stdev=5002.78, samples=9 00:36:11.006 lat (usec) : 250=3.17%, 500=16.93%, 750=21.52%, 1000=20.97% 00:36:11.006 lat (msec) : 2=28.09%, 4=8.47%, 10=0.85% 00:36:11.006 cpu : usr=31.60%, sys=50.36%, ctx=148, majf=0, minf=764 00:36:11.006 IO depths : 1=0.2%, 2=1.4%, 4=4.3%, 8=10.6%, 16=25.1%, 32=56.6%, >=64=1.8% 00:36:11.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.006 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:36:11.006 issued rwts: total=0,183198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.006 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:11.006 00:36:11.006 Run status group 0 (all jobs): 00:36:11.006 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=716MiB (750MB), run=5001-5001msec 00:36:11.943 ----------------------------------------------------- 00:36:11.943 Suppressions used: 00:36:11.943 count bytes template 00:36:11.943 1 11 /usr/src/fio/parse.c 00:36:11.943 1 8 libtcmalloc_minimal.so 00:36:11.943 1 904 libcrypto.so 00:36:11.943 ----------------------------------------------------- 00:36:11.943 00:36:11.943 00:36:11.943 real 0m15.579s 00:36:11.943 user 0m7.372s 00:36:11.943 sys 0m6.030s 00:36:11.943 17:32:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.943 17:32:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:11.943 ************************************ 00:36:11.943 END TEST xnvme_fio_plugin 00:36:11.943 ************************************ 00:36:12.205 17:32:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:36:12.205 17:32:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:36:12.205 17:32:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:36:12.205 17:32:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:36:12.205 17:32:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:12.205 17:32:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:12.205 17:32:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:12.205 ************************************ 00:36:12.205 START TEST xnvme_rpc 00:36:12.205 ************************************ 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71413 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71413 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71413 ']' 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.205 17:32:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:12.205 [2024-11-26 17:32:49.516356] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
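Note: the xnvme_fio_plugin test that ends above drives fio through SPDK's external bdev ioengine instead of bdevperf. The harness resolves the ASAN runtime with ldd/grep and prepends it to LD_PRELOAD so the sanitized plugin loads cleanly; a sketch of the resulting invocation, copied from the trace (the JSON config is fed on /dev/fd/62 the same way as for bdevperf):

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev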
00:36:12.205 [2024-11-26 17:32:49.516515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71413 ] 00:36:12.462 [2024-11-26 17:32:49.700250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.462 [2024-11-26 17:32:49.875342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 xnvme_bdev 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:36:13.841 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71413 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71413 ']' 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71413 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71413 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:14.100 killing process with pid 71413 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71413' 00:36:14.100 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71413 00:36:14.101 17:32:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71413 00:36:17.396 00:36:17.396 real 0m5.049s 00:36:17.396 user 0m5.050s 00:36:17.396 sys 0m0.726s 00:36:17.396 17:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.396 17:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:17.396 ************************************ 00:36:17.396 END TEST xnvme_rpc 00:36:17.396 ************************************ 00:36:17.396 17:32:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:36:17.396 17:32:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:17.396 17:32:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.396 17:32:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:17.396 ************************************ 00:36:17.396 START TEST xnvme_bdevperf 00:36:17.396 ************************************ 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:17.396 17:32:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:17.396 { 00:36:17.396 "subsystems": [ 00:36:17.396 { 00:36:17.396 "subsystem": "bdev", 00:36:17.396 "config": [ 00:36:17.396 { 00:36:17.396 "params": { 00:36:17.396 "io_mechanism": "libaio", 00:36:17.396 "conserve_cpu": true, 00:36:17.396 "filename": "/dev/nvme0n1", 00:36:17.396 "name": "xnvme_bdev" 00:36:17.396 }, 00:36:17.396 "method": "bdev_xnvme_create" 00:36:17.396 }, 00:36:17.396 { 00:36:17.396 "method": "bdev_wait_for_examine" 00:36:17.396 } 00:36:17.396 ] 00:36:17.396 } 00:36:17.396 ] 00:36:17.396 } 00:36:17.396 [2024-11-26 17:32:54.616697] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:36:17.396 [2024-11-26 17:32:54.616869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71511 ] 00:36:17.396 [2024-11-26 17:32:54.820513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.656 [2024-11-26 17:32:54.963297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.224 Running I/O for 5 seconds... 00:36:20.128 37491.00 IOPS, 146.45 MiB/s [2024-11-26T17:32:58.515Z] 37736.00 IOPS, 147.41 MiB/s [2024-11-26T17:32:59.451Z] 35765.33 IOPS, 139.71 MiB/s [2024-11-26T17:33:00.447Z] 35755.00 IOPS, 139.67 MiB/s 00:36:23.001 Latency(us) 00:36:23.001 [2024-11-26T17:33:00.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.001 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:23.001 xnvme_bdev : 5.00 35447.27 138.47 0.00 0.00 1801.24 188.70 11390.10 00:36:23.001 [2024-11-26T17:33:00.447Z] =================================================================================================================== 00:36:23.001 [2024-11-26T17:33:00.447Z] Total : 35447.27 138.47 0.00 0.00 1801.24 188.70 11390.10 00:36:24.376 17:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:24.376 17:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:36:24.376 17:33:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:24.376 17:33:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:24.376 17:33:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.376 { 00:36:24.376 "subsystems": [ 00:36:24.376 { 00:36:24.376 "subsystem": "bdev", 00:36:24.376 "config": [ 00:36:24.376 { 00:36:24.376 "params": { 00:36:24.376 "io_mechanism": "libaio", 00:36:24.376 "conserve_cpu": true, 00:36:24.376 "filename": "/dev/nvme0n1", 00:36:24.376 "name": "xnvme_bdev" 00:36:24.376 }, 00:36:24.376 "method": "bdev_xnvme_create" 00:36:24.376 }, 00:36:24.376 { 00:36:24.376 "method": "bdev_wait_for_examine" 00:36:24.376 } 00:36:24.376 ] 00:36:24.376 } 00:36:24.376 ] 00:36:24.376 } 00:36:24.376 [2024-11-26 17:33:01.727254] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
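Note: the second xnvme_rpc pass (pid 71413, completed above) differs from the first only in conserving CPU: the harness maps cc["true"] to a -c flag on create, and the jq probe then reports the field as true rather than false. A sketch, again assuming rpc_cmd forwards to scripts/rpc.py:

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected output: true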
00:36:24.376 [2024-11-26 17:33:01.727401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71592 ] 00:36:24.636 [2024-11-26 17:33:01.911274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.636 [2024-11-26 17:33:02.043340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.205 Running I/O for 5 seconds... 00:36:27.077 34271.00 IOPS, 133.87 MiB/s [2024-11-26T17:33:05.899Z] 34912.00 IOPS, 136.38 MiB/s [2024-11-26T17:33:06.465Z] 34870.33 IOPS, 136.21 MiB/s [2024-11-26T17:33:07.858Z] 34563.25 IOPS, 135.01 MiB/s 00:36:30.412 Latency(us) 00:36:30.412 [2024-11-26T17:33:07.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.412 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:36:30.412 xnvme_bdev : 5.00 36362.95 142.04 0.00 0.00 1755.58 78.70 43041.98 00:36:30.412 [2024-11-26T17:33:07.858Z] =================================================================================================================== 00:36:30.412 [2024-11-26T17:33:07.858Z] Total : 36362.95 142.04 0.00 0.00 1755.58 78.70 43041.98 00:36:31.350 00:36:31.350 real 0m14.256s 00:36:31.350 user 0m5.806s 00:36:31.350 sys 0m6.073s 00:36:31.350 17:33:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.350 17:33:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:31.350 ************************************ 00:36:31.350 END TEST xnvme_bdevperf 00:36:31.350 ************************************ 00:36:31.610 17:33:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:36:31.610 17:33:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:31.610 17:33:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:31.610 17:33:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:31.610 ************************************ 00:36:31.610 START TEST xnvme_fio_plugin 00:36:31.610 ************************************ 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:31.610 17:33:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:31.610 { 00:36:31.610 "subsystems": [ 00:36:31.610 { 00:36:31.610 "subsystem": "bdev", 00:36:31.610 "config": [ 00:36:31.610 { 00:36:31.610 "params": { 00:36:31.610 "io_mechanism": "libaio", 00:36:31.610 "conserve_cpu": true, 00:36:31.610 "filename": "/dev/nvme0n1", 00:36:31.610 "name": "xnvme_bdev" 00:36:31.610 }, 00:36:31.610 "method": "bdev_xnvme_create" 00:36:31.610 }, 00:36:31.610 { 00:36:31.610 "method": "bdev_wait_for_examine" 00:36:31.610 } 00:36:31.610 ] 00:36:31.610 } 00:36:31.610 ] 00:36:31.610 } 00:36:31.869 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:31.869 fio-3.35 00:36:31.869 Starting 1 thread 00:36:38.444 00:36:38.444 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71717: Tue Nov 26 17:33:14 2024 00:36:38.444 read: IOPS=44.0k, BW=172MiB/s (180MB/s)(860MiB/5001msec) 00:36:38.444 slat (usec): min=3, max=3261, avg=19.47, stdev=31.42 00:36:38.444 clat (usec): min=103, max=5625, avg=851.74, stdev=535.46 00:36:38.444 lat (usec): min=157, max=5683, avg=871.21, stdev=539.32 00:36:38.444 clat percentiles (usec): 00:36:38.444 | 1.00th=[ 180], 5.00th=[ 255], 10.00th=[ 318], 20.00th=[ 441], 00:36:38.444 | 30.00th=[ 553], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 881], 00:36:38.444 | 70.00th=[ 996], 80.00th=[ 1139], 90.00th=[ 1369], 95.00th=[ 1696], 00:36:38.444 | 99.00th=[ 3130], 99.50th=[ 3720], 99.90th=[ 4424], 99.95th=[ 4686], 00:36:38.444 | 99.99th=[ 5014] 00:36:38.444 bw ( KiB/s): min=154224, max=203144, per=100.00%, avg=178526.22, stdev=14610.77, samples=9 
00:36:38.444 iops : min=38556, max=50786, avg=44631.56, stdev=3652.69, samples=9 00:36:38.444 lat (usec) : 250=4.60%, 500=20.55%, 750=22.61%, 1000=22.67% 00:36:38.444 lat (msec) : 2=26.28%, 4=2.99%, 10=0.30% 00:36:38.444 cpu : usr=29.26%, sys=53.66%, ctx=102, majf=0, minf=764 00:36:38.444 IO depths : 1=0.2%, 2=1.1%, 4=4.3%, 8=11.3%, 16=25.8%, 32=55.5%, >=64=1.8% 00:36:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.444 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:36:38.444 issued rwts: total=220069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.444 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:38.444 00:36:38.444 Run status group 0 (all jobs): 00:36:38.444 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=860MiB (901MB), run=5001-5001msec 00:36:39.388 ----------------------------------------------------- 00:36:39.388 Suppressions used: 00:36:39.388 count bytes template 00:36:39.388 1 11 /usr/src/fio/parse.c 00:36:39.388 1 8 libtcmalloc_minimal.so 00:36:39.388 1 904 libcrypto.so 00:36:39.388 ----------------------------------------------------- 00:36:39.388 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- 
# [[ -n /usr/lib64/libasan.so.8 ]] 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:39.388 17:33:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:39.388 { 00:36:39.388 "subsystems": [ 00:36:39.388 { 00:36:39.388 "subsystem": "bdev", 00:36:39.388 "config": [ 00:36:39.388 { 00:36:39.388 "params": { 00:36:39.388 "io_mechanism": "libaio", 00:36:39.388 "conserve_cpu": true, 00:36:39.388 "filename": "/dev/nvme0n1", 00:36:39.388 "name": "xnvme_bdev" 00:36:39.388 }, 00:36:39.389 "method": "bdev_xnvme_create" 00:36:39.389 }, 00:36:39.389 { 00:36:39.389 "method": "bdev_wait_for_examine" 00:36:39.389 } 00:36:39.389 ] 00:36:39.389 } 00:36:39.389 ] 00:36:39.389 } 00:36:39.648 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:39.648 fio-3.35 00:36:39.648 Starting 1 thread 00:36:46.209 00:36:46.209 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71820: Tue Nov 26 17:33:22 2024 00:36:46.209 write: IOPS=35.8k, BW=140MiB/s (147MB/s)(699MiB/5001msec); 0 zone resets 00:36:46.209 slat (usec): min=4, max=2609, avg=23.54, stdev=34.11 00:36:46.209 clat (usec): min=44, max=7210, avg=1060.27, stdev=751.05 00:36:46.209 lat (usec): min=107, max=7263, avg=1083.82, stdev=758.83 00:36:46.209 clat percentiles (usec): 00:36:46.209 | 1.00th=[ 194], 5.00th=[ 281], 10.00th=[ 363], 20.00th=[ 510], 00:36:46.209 | 30.00th=[ 644], 40.00th=[ 766], 50.00th=[ 889], 60.00th=[ 1012], 00:36:46.209 | 70.00th=[ 1172], 80.00th=[ 1385], 90.00th=[ 1975], 95.00th=[ 2638], 00:36:46.209 | 99.00th=[ 3949], 99.50th=[ 4359], 99.90th=[ 5145], 99.95th=[ 5538], 00:36:46.209 | 99.99th=[ 6259] 00:36:46.209 bw ( KiB/s): min=101176, max=175273, per=99.98%, avg=143144.11, stdev=28257.42, samples=9 00:36:46.209 iops : min=25294, max=43818, avg=35786.00, stdev=7064.32, samples=9 00:36:46.209 lat (usec) : 50=0.01%, 250=3.38%, 500=15.81%, 750=19.50%, 1000=20.42% 00:36:46.209 lat (msec) : 2=31.09%, 4=8.86%, 10=0.94% 00:36:46.209 cpu : usr=30.88%, sys=50.82%, ctx=92, majf=0, minf=764 00:36:46.209 IO depths : 1=0.2%, 2=1.4%, 4=4.4%, 8=10.9%, 16=25.0%, 32=56.3%, >=64=1.8% 00:36:46.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.209 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:36:46.209 issued rwts: total=0,179008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:46.209 00:36:46.209 Run status group 0 (all jobs): 00:36:46.209 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=699MiB (733MB), run=5001-5001msec 00:36:47.216 ----------------------------------------------------- 00:36:47.216 Suppressions used: 00:36:47.216 count bytes template 00:36:47.216 1 11 /usr/src/fio/parse.c 00:36:47.216 1 8 libtcmalloc_minimal.so 00:36:47.216 1 904 libcrypto.so 00:36:47.216 ----------------------------------------------------- 00:36:47.216 00:36:47.216 00:36:47.216 real 0m15.603s 00:36:47.216 user 0m7.347s 00:36:47.216 sys 0m6.163s 00:36:47.216 17:33:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.216 17:33:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:47.216 ************************************ 00:36:47.216 END TEST xnvme_fio_plugin 00:36:47.216 ************************************ 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:36:47.216 17:33:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:36:47.216 17:33:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:47.216 17:33:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.216 17:33:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:47.216 ************************************ 00:36:47.216 START TEST xnvme_rpc 00:36:47.216 ************************************ 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71912 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71912 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71912 ']' 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.216 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.217 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.217 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.217 17:33:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:47.475 [2024-11-26 17:33:24.627130] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
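Note: at this point the outer loop advances to the next io_mechanism: the same rpc/bdevperf/fio_plugin trio repeats with io_uring instead of libaio, starting again from conserve_cpu=false. On the create side only the mechanism argument changes:

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring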
00:36:47.475 [2024-11-26 17:33:24.627308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71912 ] 00:36:47.475 [2024-11-26 17:33:24.815382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.733 [2024-11-26 17:33:24.968947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.669 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.669 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:36:48.669 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:36:48.669 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.669 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:48.669 xnvme_bdev 00:36:48.669 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71912 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71912 ']' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71912 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71912 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:48.999 killing process with pid 71912 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71912' 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71912 00:36:48.999 17:33:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71912 00:36:52.289 00:36:52.290 real 0m4.879s 00:36:52.290 user 0m4.804s 00:36:52.290 sys 0m0.722s 00:36:52.290 17:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:52.290 17:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:52.290 ************************************ 00:36:52.290 END TEST xnvme_rpc 00:36:52.290 ************************************ 00:36:52.290 17:33:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:36:52.290 17:33:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:52.290 17:33:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:52.290 17:33:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:52.290 ************************************ 00:36:52.290 START TEST xnvme_bdevperf 00:36:52.290 ************************************ 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:52.290 17:33:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:52.290 { 00:36:52.290 "subsystems": [ 00:36:52.290 { 00:36:52.290 "subsystem": "bdev", 00:36:52.290 "config": [ 00:36:52.290 { 00:36:52.290 "params": { 00:36:52.290 "io_mechanism": "io_uring", 00:36:52.290 "conserve_cpu": false, 00:36:52.290 "filename": "/dev/nvme0n1", 00:36:52.290 "name": "xnvme_bdev" 00:36:52.290 }, 00:36:52.290 "method": "bdev_xnvme_create" 00:36:52.290 }, 00:36:52.290 { 00:36:52.290 "method": "bdev_wait_for_examine" 00:36:52.290 } 00:36:52.290 ] 00:36:52.290 } 00:36:52.290 ] 00:36:52.290 } 00:36:52.290 [2024-11-26 17:33:29.544209] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:36:52.290 [2024-11-26 17:33:29.544357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71998 ] 00:36:52.290 [2024-11-26 17:33:29.719693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.616 [2024-11-26 17:33:29.870246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.875 Running I/O for 5 seconds... 00:36:55.190 63155.00 IOPS, 246.70 MiB/s [2024-11-26T17:33:33.572Z] 60307.50 IOPS, 235.58 MiB/s [2024-11-26T17:33:34.505Z] 55243.00 IOPS, 215.79 MiB/s [2024-11-26T17:33:35.438Z] 54839.25 IOPS, 214.22 MiB/s 00:36:57.992 Latency(us) 00:36:57.992 [2024-11-26T17:33:35.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.992 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:57.992 xnvme_bdev : 5.00 53396.55 208.58 0.00 0.00 1194.74 357.73 7040.11 00:36:57.992 [2024-11-26T17:33:35.438Z] =================================================================================================================== 00:36:57.992 [2024-11-26T17:33:35.438Z] Total : 53396.55 208.58 0.00 0.00 1194.74 357.73 7040.11 00:36:59.370 17:33:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:59.370 17:33:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:36:59.370 17:33:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:59.370 17:33:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:59.370 17:33:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:59.370 { 00:36:59.370 "subsystems": [ 00:36:59.370 { 00:36:59.370 "subsystem": "bdev", 00:36:59.370 "config": [ 00:36:59.370 { 00:36:59.370 "params": { 00:36:59.370 "io_mechanism": "io_uring", 00:36:59.370 "conserve_cpu": false, 00:36:59.370 "filename": "/dev/nvme0n1", 00:36:59.370 "name": "xnvme_bdev" 00:36:59.370 }, 00:36:59.370 "method": "bdev_xnvme_create" 00:36:59.370 }, 00:36:59.370 { 00:36:59.370 "method": "bdev_wait_for_examine" 00:36:59.370 } 00:36:59.370 ] 00:36:59.370 } 00:36:59.370 ] 00:36:59.370 } 00:36:59.370 [2024-11-26 17:33:36.691701] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
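Note: with the io_uring Total row above, each randread configuration so far has a headline number in this log (libaio without/with conserve_cpu: 38839.63 and 35447.27 IOPS; io_uring: 53396.55). A hypothetical one-liner for skimming those rows out of a saved copy of this console output (the build.log filename is an assumption):

grep -E 'Total +:' build.log   # prints each bdevperf Total row in order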
00:36:59.370 [2024-11-26 17:33:36.691848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ] 00:36:59.629 [2024-11-26 17:33:36.877084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.629 [2024-11-26 17:33:37.023994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.196 Running I/O for 5 seconds... 00:37:02.067 31552.00 IOPS, 123.25 MiB/s [2024-11-26T17:33:40.448Z] 29088.00 IOPS, 113.62 MiB/s [2024-11-26T17:33:41.825Z] 28117.33 IOPS, 109.83 MiB/s [2024-11-26T17:33:42.759Z] 29252.75 IOPS, 114.27 MiB/s 00:37:05.313 Latency(us) 00:37:05.313 [2024-11-26T17:33:42.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.313 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:37:05.313 xnvme_bdev : 5.00 29689.37 115.97 0.00 0.00 2147.82 377.40 9901.95 00:37:05.313 [2024-11-26T17:33:42.759Z] =================================================================================================================== 00:37:05.313 [2024-11-26T17:33:42.759Z] Total : 29689.37 115.97 0.00 0.00 2147.82 377.40 9901.95 00:37:06.686 00:37:06.686 real 0m14.469s 00:37:06.686 user 0m7.575s 00:37:06.686 sys 0m6.674s 00:37:06.686 17:33:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:06.686 17:33:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.686 ************************************ 00:37:06.686 END TEST xnvme_bdevperf 00:37:06.686 ************************************ 00:37:06.686 17:33:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:37:06.686 17:33:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:06.686 17:33:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:06.686 17:33:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:06.686 ************************************ 00:37:06.686 START TEST xnvme_fio_plugin 00:37:06.686 ************************************ 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:06.686 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:06.687 17:33:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:06.687 { 00:37:06.687 "subsystems": [ 00:37:06.687 { 00:37:06.687 "subsystem": "bdev", 00:37:06.687 "config": [ 00:37:06.687 { 00:37:06.687 "params": { 00:37:06.687 "io_mechanism": "io_uring", 00:37:06.687 "conserve_cpu": false, 00:37:06.687 "filename": "/dev/nvme0n1", 00:37:06.687 "name": "xnvme_bdev" 00:37:06.687 }, 00:37:06.687 "method": "bdev_xnvme_create" 00:37:06.687 }, 00:37:06.687 { 00:37:06.687 "method": "bdev_wait_for_examine" 00:37:06.687 } 00:37:06.687 ] 00:37:06.687 } 00:37:06.687 ] 00:37:06.687 } 00:37:06.944 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:06.944 fio-3.35 00:37:06.944 Starting 1 thread 00:37:13.524 00:37:13.524 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72215: Tue Nov 26 17:33:50 2024 00:37:13.524 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(696MiB/5002msec) 00:37:13.524 slat (nsec): min=2813, max=80586, avg=5825.80, stdev=2887.58 00:37:13.524 clat (usec): min=662, max=3283, avg=1567.04, stdev=417.90 00:37:13.524 lat (usec): min=665, max=3295, avg=1572.87, stdev=419.57 00:37:13.524 clat percentiles (usec): 00:37:13.524 | 1.00th=[ 807], 5.00th=[ 930], 10.00th=[ 1037], 20.00th=[ 1188], 00:37:13.524 | 30.00th=[ 1303], 40.00th=[ 1434], 50.00th=[ 1549], 60.00th=[ 1663], 00:37:13.524 | 70.00th=[ 1778], 80.00th=[ 1909], 90.00th=[ 2114], 95.00th=[ 2311], 00:37:13.524 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 2933], 99.95th=[ 3032], 00:37:13.524 | 99.99th=[ 3163] 00:37:13.524 bw ( KiB/s): min=124416, max=159744, per=100.00%, avg=143303.11, 
stdev=11854.89, samples=9 00:37:13.524 iops : min=31104, max=39936, avg=35825.78, stdev=2963.72, samples=9 00:37:13.524 lat (usec) : 750=0.26%, 1000=7.97% 00:37:13.524 lat (msec) : 2=76.76%, 4=15.00% 00:37:13.524 cpu : usr=38.27%, sys=60.65%, ctx=11, majf=0, minf=762 00:37:13.525 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:37:13.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.525 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:37:13.525 issued rwts: total=178048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:13.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:13.525 00:37:13.525 Run status group 0 (all jobs): 00:37:13.525 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=696MiB (729MB), run=5002-5002msec 00:37:14.461 ----------------------------------------------------- 00:37:14.461 Suppressions used: 00:37:14.461 count bytes template 00:37:14.461 1 11 /usr/src/fio/parse.c 00:37:14.461 1 8 libtcmalloc_minimal.so 00:37:14.461 1 904 libcrypto.so 00:37:14.461 ----------------------------------------------------- 00:37:14.461 00:37:14.461 17:33:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:14.462 { 00:37:14.462 "subsystems": [ 00:37:14.462 { 00:37:14.462 "subsystem": "bdev", 00:37:14.462 "config": [ 00:37:14.462 { 00:37:14.462 "params": { 00:37:14.462 "io_mechanism": "io_uring", 00:37:14.462 
"conserve_cpu": false, 00:37:14.462 "filename": "/dev/nvme0n1", 00:37:14.462 "name": "xnvme_bdev" 00:37:14.462 }, 00:37:14.462 "method": "bdev_xnvme_create" 00:37:14.462 }, 00:37:14.462 { 00:37:14.462 "method": "bdev_wait_for_examine" 00:37:14.462 } 00:37:14.462 ] 00:37:14.462 } 00:37:14.462 ] 00:37:14.462 } 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:14.462 17:33:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:14.720 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:14.720 fio-3.35 00:37:14.720 Starting 1 thread 00:37:21.288 00:37:21.288 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72312: Tue Nov 26 17:33:57 2024 00:37:21.288 write: IOPS=29.6k, BW=116MiB/s (121MB/s)(578MiB/5002msec); 0 zone resets 00:37:21.288 slat (nsec): min=2887, max=73648, avg=7217.47, stdev=2521.74 00:37:21.288 clat (usec): min=692, max=4337, avg=1874.71, stdev=347.70 00:37:21.288 lat (usec): min=695, max=4348, avg=1881.93, stdev=348.87 00:37:21.288 clat percentiles (usec): 00:37:21.288 | 1.00th=[ 930], 5.00th=[ 1205], 10.00th=[ 1434], 20.00th=[ 1647], 00:37:21.288 | 30.00th=[ 1745], 40.00th=[ 1811], 50.00th=[ 1893], 60.00th=[ 1958], 00:37:21.288 | 70.00th=[ 2024], 80.00th=[ 2114], 90.00th=[ 2278], 95.00th=[ 2442], 00:37:21.288 | 99.00th=[ 2671], 99.50th=[ 2737], 99.90th=[ 2933], 99.95th=[ 3359], 00:37:21.288 | 99.99th=[ 4178] 00:37:21.288 bw ( KiB/s): min=108544, max=133120, per=100.00%, avg=120320.00, stdev=8215.96, samples=9 00:37:21.288 iops : min=27136, max=33280, avg=30080.00, stdev=2053.99, samples=9 00:37:21.288 lat (usec) : 750=0.07%, 1000=1.70% 00:37:21.288 lat (msec) : 2=64.25%, 4=33.95%, 10=0.02% 00:37:21.288 cpu : usr=39.19%, sys=59.67%, ctx=10, majf=0, minf=762 00:37:21.288 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:37:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.288 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:37:21.288 issued rwts: total=0,148064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.288 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.288 00:37:21.288 Run status group 0 (all jobs): 00:37:21.288 WRITE: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=578MiB (606MB), run=5002-5002msec 00:37:22.228 ----------------------------------------------------- 00:37:22.228 Suppressions used: 00:37:22.228 count bytes template 00:37:22.228 1 11 /usr/src/fio/parse.c 00:37:22.228 1 8 libtcmalloc_minimal.so 00:37:22.228 1 904 libcrypto.so 00:37:22.228 ----------------------------------------------------- 00:37:22.228 00:37:22.228 00:37:22.228 real 0m15.577s 00:37:22.228 user 0m8.284s 00:37:22.228 sys 0m6.913s 00:37:22.228 17:33:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:37:22.228 17:33:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:22.228 ************************************ 00:37:22.228 END TEST xnvme_fio_plugin 00:37:22.228 ************************************ 00:37:22.228 17:33:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:22.228 17:33:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:37:22.228 17:33:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:37:22.228 17:33:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:22.228 17:33:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:22.228 17:33:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.228 17:33:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:22.228 ************************************ 00:37:22.228 START TEST xnvme_rpc 00:37:22.228 ************************************ 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72407 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72407 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72407 ']' 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:22.228 17:33:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:22.489 [2024-11-26 17:33:59.708738] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
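The rpc_cmd calls traced in this test map onto scripts/rpc.py against the spdk_tgt that just started. A rough hand-run equivalent, sketched under the assumption that rpc_cmd is the harness wrapper around rpc.py on the default /var/tmp/spdk.sock socket:

    # create the bdev; -c turns conserve_cpu on for this round
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    # verify a param the same way the test's rpc_xnvme helper does
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    # tear down again
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev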
00:37:22.489 [2024-11-26 17:33:59.708925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72407 ] 00:37:22.489 [2024-11-26 17:33:59.889950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.748 [2024-11-26 17:34:00.032913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 xnvme_bdev 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:24.154 17:34:01 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72407 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72407 ']' 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72407 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72407 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:24.154 killing process with pid 72407 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72407' 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72407 00:37:24.154 17:34:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72407 00:37:27.471 00:37:27.471 real 0m4.787s 00:37:27.471 user 0m4.672s 00:37:27.471 sys 0m0.726s 00:37:27.471 17:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.471 17:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:27.471 ************************************ 00:37:27.471 END TEST xnvme_rpc 00:37:27.471 ************************************ 00:37:27.471 17:34:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:37:27.471 17:34:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:27.471 17:34:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.471 17:34:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:27.471 ************************************ 00:37:27.471 START TEST xnvme_bdevperf 00:37:27.471 ************************************ 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:27.471 17:34:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:27.471 { 00:37:27.471 "subsystems": [ 00:37:27.471 { 00:37:27.471 "subsystem": "bdev", 00:37:27.471 "config": [ 00:37:27.471 { 00:37:27.471 "params": { 00:37:27.471 "io_mechanism": "io_uring", 00:37:27.471 "conserve_cpu": true, 00:37:27.471 "filename": "/dev/nvme0n1", 00:37:27.471 "name": "xnvme_bdev" 00:37:27.471 }, 00:37:27.471 "method": "bdev_xnvme_create" 00:37:27.471 }, 00:37:27.471 { 00:37:27.471 "method": "bdev_wait_for_examine" 00:37:27.471 } 00:37:27.471 ] 00:37:27.471 } 00:37:27.471 ] 00:37:27.471 } 00:37:27.471 [2024-11-26 17:34:04.550437] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:37:27.471 [2024-11-26 17:34:04.550561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72501 ] 00:37:27.471 [2024-11-26 17:34:04.733766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.471 [2024-11-26 17:34:04.878475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.039 Running I/O for 5 seconds... 00:37:29.904 41005.00 IOPS, 160.18 MiB/s [2024-11-26T17:34:08.723Z] 43792.50 IOPS, 171.06 MiB/s [2024-11-26T17:34:09.659Z] 45105.33 IOPS, 176.19 MiB/s [2024-11-26T17:34:10.593Z] 45612.00 IOPS, 178.17 MiB/s 00:37:33.147 Latency(us) 00:37:33.147 [2024-11-26T17:34:10.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.147 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:37:33.147 xnvme_bdev : 5.00 44698.33 174.60 0.00 0.00 1427.36 287.97 5637.81 00:37:33.147 [2024-11-26T17:34:10.593Z] =================================================================================================================== 00:37:33.147 [2024-11-26T17:34:10.593Z] Total : 44698.33 174.60 0.00 0.00 1427.36 287.97 5637.81 00:37:34.522 17:34:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:34.522 17:34:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:37:34.522 17:34:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:34.522 17:34:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:34.522 17:34:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:34.522 { 00:37:34.522 "subsystems": [ 00:37:34.522 { 00:37:34.522 "subsystem": "bdev", 00:37:34.522 "config": [ 00:37:34.522 { 00:37:34.522 "params": { 00:37:34.522 "io_mechanism": "io_uring", 00:37:34.522 "conserve_cpu": true, 00:37:34.522 "filename": "/dev/nvme0n1", 00:37:34.522 "name": "xnvme_bdev" 00:37:34.522 }, 00:37:34.522 "method": "bdev_xnvme_create" 00:37:34.522 }, 00:37:34.522 { 00:37:34.522 "method": "bdev_wait_for_examine" 00:37:34.522 } 00:37:34.522 ] 00:37:34.522 } 00:37:34.522 ] 00:37:34.522 } 00:37:34.522 [2024-11-26 17:34:11.759601] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
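Relative to the earlier io_uring passes, the only change in the generated config is the conserve_cpu flag, and it shows in the numbers: randread drops from ~53.4k IOPS at 1194.74 us average (conserve_cpu=false) to ~44.7k IOPS at 1427.36 us here. A hedged reading, going only by the flag's name, is that it trades completion-polling aggressiveness for CPU time. The config delta is a single field:

    "conserve_cpu": true,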
00:37:34.522 [2024-11-26 17:34:11.759757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72583 ] 00:37:34.522 [2024-11-26 17:34:11.936219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.781 [2024-11-26 17:34:12.085187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.348 Running I/O for 5 seconds... 00:37:37.220 39375.00 IOPS, 153.81 MiB/s [2024-11-26T17:34:15.602Z] 34631.50 IOPS, 135.28 MiB/s [2024-11-26T17:34:16.540Z] 33138.00 IOPS, 129.45 MiB/s [2024-11-26T17:34:17.918Z] 31685.50 IOPS, 123.77 MiB/s [2024-11-26T17:34:17.918Z] 33182.00 IOPS, 129.62 MiB/s 00:37:40.472 Latency(us) 00:37:40.472 [2024-11-26T17:34:17.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.472 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:37:40.473 xnvme_bdev : 5.01 33120.40 129.38 0.00 0.00 1925.20 676.11 8528.27 00:37:40.473 [2024-11-26T17:34:17.919Z] =================================================================================================================== 00:37:40.473 [2024-11-26T17:34:17.919Z] Total : 33120.40 129.38 0.00 0.00 1925.20 676.11 8528.27 00:37:41.410 00:37:41.410 real 0m14.336s 00:37:41.410 user 0m8.161s 00:37:41.410 sys 0m5.719s 00:37:41.410 17:34:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.410 17:34:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:41.410 ************************************ 00:37:41.410 END TEST xnvme_bdevperf 00:37:41.410 ************************************ 00:37:41.410 17:34:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:37:41.410 17:34:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.410 17:34:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.410 17:34:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:41.410 ************************************ 00:37:41.410 START TEST xnvme_fio_plugin 00:37:41.410 ************************************ 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:41.410 17:34:18 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:41.410 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:41.411 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:41.411 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:41.411 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:41.411 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.411 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:41.671 17:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:41.671 { 00:37:41.671 "subsystems": [ 00:37:41.671 { 00:37:41.671 "subsystem": "bdev", 00:37:41.671 "config": [ 00:37:41.671 { 00:37:41.671 "params": { 00:37:41.671 "io_mechanism": "io_uring", 00:37:41.671 "conserve_cpu": true, 00:37:41.671 "filename": "/dev/nvme0n1", 00:37:41.671 "name": "xnvme_bdev" 00:37:41.671 }, 00:37:41.671 "method": "bdev_xnvme_create" 00:37:41.671 }, 00:37:41.671 { 00:37:41.671 "method": "bdev_wait_for_examine" 00:37:41.671 } 00:37:41.671 ] 00:37:41.671 } 00:37:41.671 ] 00:37:41.671 } 00:37:41.671 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:41.671 fio-3.35 00:37:41.671 Starting 1 thread 00:37:48.245 00:37:48.245 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72710: Tue Nov 26 17:34:24 2024 00:37:48.245 read: IOPS=28.6k, BW=112MiB/s (117MB/s)(559MiB/5001msec) 00:37:48.245 slat (usec): min=4, max=1122, avg= 7.31, stdev= 4.07 00:37:48.245 clat (usec): min=1228, max=3840, avg=1948.26, stdev=301.61 00:37:48.245 lat (usec): min=1233, max=3853, avg=1955.58, stdev=303.29 00:37:48.245 clat percentiles (usec): 00:37:48.245 | 1.00th=[ 1401], 5.00th=[ 1516], 10.00th=[ 1598], 20.00th=[ 1696], 00:37:48.245 | 30.00th=[ 1762], 40.00th=[ 1844], 50.00th=[ 1909], 60.00th=[ 1991], 00:37:48.245 | 70.00th=[ 2073], 80.00th=[ 2212], 90.00th=[ 2376], 95.00th=[ 2507], 00:37:48.245 | 99.00th=[ 2704], 99.50th=[ 2802], 99.90th=[ 3064], 99.95th=[ 3130], 00:37:48.245 | 99.99th=[ 3687] 00:37:48.245 bw 
( KiB/s): min=104960, max=132096, per=99.40%, avg=113777.78, stdev=8616.13, samples=9 00:37:48.245 iops : min=26240, max=33024, avg=28444.44, stdev=2154.03, samples=9 00:37:48.245 lat (msec) : 2=62.16%, 4=37.84% 00:37:48.245 cpu : usr=46.14%, sys=50.38%, ctx=12, majf=0, minf=762 00:37:48.245 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:37:48.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.246 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:37:48.246 issued rwts: total=143104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:48.246 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:48.246 00:37:48.246 Run status group 0 (all jobs): 00:37:48.246 READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=559MiB (586MB), run=5001-5001msec 00:37:49.184 ----------------------------------------------------- 00:37:49.184 Suppressions used: 00:37:49.184 count bytes template 00:37:49.184 1 11 /usr/src/fio/parse.c 00:37:49.184 1 8 libtcmalloc_minimal.so 00:37:49.184 1 904 libcrypto.so 00:37:49.184 ----------------------------------------------------- 00:37:49.184 00:37:49.184 17:34:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:49.184 17:34:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:49.184 17:34:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:49.184 17:34:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:49.185 { 00:37:49.185 "subsystems": [ 00:37:49.185 { 00:37:49.185 "subsystem": "bdev", 00:37:49.185 "config": [ 00:37:49.185 { 00:37:49.185 "params": { 00:37:49.185 "io_mechanism": "io_uring", 00:37:49.185 "conserve_cpu": true, 00:37:49.185 "filename": "/dev/nvme0n1", 00:37:49.185 "name": "xnvme_bdev" 00:37:49.185 }, 00:37:49.185 "method": "bdev_xnvme_create" 00:37:49.185 }, 00:37:49.185 { 00:37:49.185 "method": "bdev_wait_for_examine" 00:37:49.185 } 00:37:49.185 ] 00:37:49.185 } 00:37:49.185 ] 00:37:49.185 } 00:37:49.185 17:34:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:49.185 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:49.185 fio-3.35 00:37:49.185 Starting 1 thread 00:37:55.755 00:37:55.756 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72808: Tue Nov 26 17:34:32 2024 00:37:55.756 write: IOPS=30.5k, BW=119MiB/s (125MB/s)(595MiB/5002msec); 0 zone resets 00:37:55.756 slat (usec): min=2, max=100, avg= 6.97, stdev= 3.20 00:37:55.756 clat (usec): min=828, max=3219, avg=1825.69, stdev=421.73 00:37:55.756 lat (usec): min=833, max=3267, avg=1832.66, stdev=423.85 00:37:55.756 clat percentiles (usec): 00:37:55.756 | 1.00th=[ 1012], 5.00th=[ 1123], 10.00th=[ 1221], 20.00th=[ 1434], 00:37:55.756 | 30.00th=[ 1598], 40.00th=[ 1729], 50.00th=[ 1844], 60.00th=[ 1942], 00:37:55.756 | 70.00th=[ 2057], 80.00th=[ 2212], 90.00th=[ 2376], 95.00th=[ 2540], 00:37:55.756 | 99.00th=[ 2704], 99.50th=[ 2769], 99.90th=[ 2868], 99.95th=[ 2933], 00:37:55.756 | 99.99th=[ 3097] 00:37:55.756 bw ( KiB/s): min=93184, max=151040, per=98.17%, avg=119664.56, stdev=18420.85, samples=9 00:37:55.756 iops : min=23296, max=37760, avg=29916.56, stdev=4605.39, samples=9 00:37:55.756 lat (usec) : 1000=0.80% 00:37:55.756 lat (msec) : 2=63.87%, 4=35.33% 00:37:55.756 cpu : usr=48.29%, sys=48.43%, ctx=22, majf=0, minf=762 00:37:55.756 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:37:55.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.756 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:37:55.756 issued rwts: total=0,152434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:55.756 00:37:55.756 Run status group 0 (all jobs): 00:37:55.756 WRITE: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=595MiB (624MB), run=5002-5002msec 00:37:56.692 ----------------------------------------------------- 00:37:56.692 Suppressions used: 00:37:56.692 count bytes template 00:37:56.692 1 11 /usr/src/fio/parse.c 00:37:56.692 1 8 libtcmalloc_minimal.so 00:37:56.692 1 904 libcrypto.so 00:37:56.692 ----------------------------------------------------- 00:37:56.692 00:37:56.692 00:37:56.692 real 0m14.978s 00:37:56.692 user 0m8.646s 00:37:56.692 sys 0m5.744s 00:37:56.692 17:34:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.692 17:34:33 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:56.692 ************************************ 00:37:56.692 END TEST xnvme_fio_plugin 00:37:56.692 ************************************ 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:37:56.692 17:34:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:56.692 17:34:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:56.692 17:34:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.692 17:34:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:56.692 ************************************ 00:37:56.692 START TEST xnvme_rpc 00:37:56.692 ************************************ 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72891 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72891 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72891 ']' 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.692 17:34:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:56.692 [2024-11-26 17:34:33.982752] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
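The third round swaps both the I/O mechanism and the target: io_uring_cmd drives the NVMe generic character device (/dev/ng0n1) through the io_uring command (passthrough) interface rather than the block device. The create call traced below, whose trailing '' leaves conserve_cpu at its false default, is roughly equivalent to (same rpc.py assumption as in the sketch above):

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd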
00:37:56.692 [2024-11-26 17:34:33.982863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72891 ] 00:37:56.951 [2024-11-26 17:34:34.149243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.951 [2024-11-26 17:34:34.302121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.330 xnvme_bdev 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72891 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72891 ']' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72891 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72891 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:58.330 killing process with pid 72891 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72891' 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72891 00:37:58.330 17:34:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72891 00:38:01.628 00:38:01.628 real 0m4.785s 00:38:01.628 user 0m4.699s 00:38:01.628 sys 0m0.677s 00:38:01.628 17:34:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:01.628 17:34:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:01.628 ************************************ 00:38:01.628 END TEST xnvme_rpc 00:38:01.628 ************************************ 00:38:01.628 17:34:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:38:01.628 17:34:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:01.628 17:34:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.628 17:34:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:01.628 ************************************ 00:38:01.628 START TEST xnvme_bdevperf 00:38:01.628 ************************************ 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:01.628 17:34:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:01.628 { 00:38:01.628 "subsystems": [ 00:38:01.628 { 00:38:01.628 "subsystem": "bdev", 00:38:01.628 "config": [ 00:38:01.628 { 00:38:01.628 "params": { 00:38:01.628 "io_mechanism": "io_uring_cmd", 00:38:01.628 "conserve_cpu": false, 00:38:01.628 "filename": "/dev/ng0n1", 00:38:01.628 "name": "xnvme_bdev" 00:38:01.629 }, 00:38:01.629 "method": "bdev_xnvme_create" 00:38:01.629 }, 00:38:01.629 { 00:38:01.629 "method": "bdev_wait_for_examine" 00:38:01.629 } 00:38:01.629 ] 00:38:01.629 } 00:38:01.629 ] 00:38:01.629 } 00:38:01.629 [2024-11-26 17:34:38.837139] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:38:01.629 [2024-11-26 17:34:38.837269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72986 ] 00:38:01.629 [2024-11-26 17:34:39.014025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.890 [2024-11-26 17:34:39.162334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.460 Running I/O for 5 seconds... 00:38:04.330 38252.00 IOPS, 149.42 MiB/s [2024-11-26T17:34:42.710Z] 40142.00 IOPS, 156.80 MiB/s [2024-11-26T17:34:43.647Z] 37585.33 IOPS, 146.82 MiB/s [2024-11-26T17:34:45.022Z] 37227.00 IOPS, 145.42 MiB/s 00:38:07.576 Latency(us) 00:38:07.576 [2024-11-26T17:34:45.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.576 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:38:07.576 xnvme_bdev : 5.00 36183.09 141.34 0.00 0.00 1762.65 747.65 7955.90 00:38:07.576 [2024-11-26T17:34:45.022Z] =================================================================================================================== 00:38:07.576 [2024-11-26T17:34:45.022Z] Total : 36183.09 141.34 0.00 0.00 1762.65 747.65 7955.90 00:38:08.511 17:34:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:08.511 17:34:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:38:08.511 17:34:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:08.511 17:34:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:08.511 17:34:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:08.511 { 00:38:08.511 "subsystems": [ 00:38:08.511 { 00:38:08.511 "subsystem": "bdev", 00:38:08.511 "config": [ 00:38:08.511 { 00:38:08.511 "params": { 00:38:08.511 "io_mechanism": "io_uring_cmd", 00:38:08.511 "conserve_cpu": false, 00:38:08.511 "filename": "/dev/ng0n1", 00:38:08.511 "name": "xnvme_bdev" 00:38:08.511 }, 00:38:08.511 "method": "bdev_xnvme_create" 00:38:08.511 }, 00:38:08.511 { 00:38:08.511 "method": "bdev_wait_for_examine" 00:38:08.511 } 00:38:08.511 ] 00:38:08.511 } 00:38:08.511 ] 00:38:08.511 } 00:38:08.770 [2024-11-26 17:34:45.987583] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
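Note that this io_uring_cmd round extends the bdevperf sweep beyond the read/write patterns: after the randread pass above and the randwrite pass starting here, the log goes on to exercise unmap and write_zeroes as well. Condensed, the stage amounts to the following loop (a sketch; $CONF is a hypothetical stand-in naming a file that holds the generated io_uring_cmd config shown above, in place of the /dev/fd/62 pipe):

    for w in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json "$CONF" -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done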
00:38:08.770 [2024-11-26 17:34:45.987783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73066 ] 00:38:08.770 [2024-11-26 17:34:46.177288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.028 [2024-11-26 17:34:46.329822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.594 Running I/O for 5 seconds... 00:38:11.470 36254.00 IOPS, 141.62 MiB/s [2024-11-26T17:34:49.850Z] 36120.50 IOPS, 141.10 MiB/s [2024-11-26T17:34:51.226Z] 35386.67 IOPS, 138.23 MiB/s [2024-11-26T17:34:52.163Z] 35989.75 IOPS, 140.58 MiB/s [2024-11-26T17:34:52.163Z] 34428.60 IOPS, 134.49 MiB/s 00:38:14.717 Latency(us) 00:38:14.717 [2024-11-26T17:34:52.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.717 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:38:14.717 xnvme_bdev : 5.01 34366.39 134.24 0.00 0.00 1855.03 84.96 8642.74 00:38:14.717 [2024-11-26T17:34:52.163Z] =================================================================================================================== 00:38:14.717 [2024-11-26T17:34:52.163Z] Total : 34366.39 134.24 0.00 0.00 1855.03 84.96 8642.74 00:38:16.096 17:34:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:16.096 17:34:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:16.096 17:34:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:38:16.096 17:34:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:16.096 17:34:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:16.096 { 00:38:16.096 "subsystems": [ 00:38:16.096 { 00:38:16.096 "subsystem": "bdev", 00:38:16.096 "config": [ 00:38:16.096 { 00:38:16.096 "params": { 00:38:16.096 "io_mechanism": "io_uring_cmd", 00:38:16.096 "conserve_cpu": false, 00:38:16.097 "filename": "/dev/ng0n1", 00:38:16.097 "name": "xnvme_bdev" 00:38:16.097 }, 00:38:16.097 "method": "bdev_xnvme_create" 00:38:16.097 }, 00:38:16.097 { 00:38:16.097 "method": "bdev_wait_for_examine" 00:38:16.097 } 00:38:16.097 ] 00:38:16.097 } 00:38:16.097 ] 00:38:16.097 } 00:38:16.097 [2024-11-26 17:34:53.204400] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:38:16.097 [2024-11-26 17:34:53.204548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73146 ] 00:38:16.097 [2024-11-26 17:34:53.387835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.097 [2024-11-26 17:34:53.532958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.670 Running I/O for 5 seconds... 
00:38:18.542 81728.00 IOPS, 319.25 MiB/s [2024-11-26T17:34:57.364Z] 80448.00 IOPS, 314.25 MiB/s [2024-11-26T17:34:58.006Z] 75221.33 IOPS, 293.83 MiB/s [2024-11-26T17:34:58.952Z] 73328.00 IOPS, 286.44 MiB/s 00:38:21.507 Latency(us) 00:38:21.507 [2024-11-26T17:34:58.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.507 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:38:21.507 xnvme_bdev : 5.00 73861.34 288.52 0.00 0.00 863.16 457.89 3276.80 00:38:21.507 [2024-11-26T17:34:58.953Z] =================================================================================================================== 00:38:21.507 [2024-11-26T17:34:58.953Z] Total : 73861.34 288.52 0.00 0.00 863.16 457.89 3276.80 00:38:23.405 17:35:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:23.405 17:35:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:38:23.405 17:35:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:23.405 17:35:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:23.405 17:35:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:23.405 { 00:38:23.405 "subsystems": [ 00:38:23.405 { 00:38:23.405 "subsystem": "bdev", 00:38:23.405 "config": [ 00:38:23.405 { 00:38:23.405 "params": { 00:38:23.405 "io_mechanism": "io_uring_cmd", 00:38:23.405 "conserve_cpu": false, 00:38:23.405 "filename": "/dev/ng0n1", 00:38:23.405 "name": "xnvme_bdev" 00:38:23.405 }, 00:38:23.405 "method": "bdev_xnvme_create" 00:38:23.405 }, 00:38:23.405 { 00:38:23.405 "method": "bdev_wait_for_examine" 00:38:23.405 } 00:38:23.405 ] 00:38:23.405 } 00:38:23.405 ] 00:38:23.405 } 00:38:23.405 [2024-11-26 17:35:00.523584] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:38:23.405 [2024-11-26 17:35:00.523753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73231 ] 00:38:23.405 [2024-11-26 17:35:00.709145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.662 [2024-11-26 17:35:00.865640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.921 Running I/O for 5 seconds... 
00:38:25.962 31159.00 IOPS, 121.71 MiB/s [2024-11-26T17:35:04.344Z] 31943.50 IOPS, 124.78 MiB/s [2024-11-26T17:35:05.723Z] 31652.33 IOPS, 123.64 MiB/s [2024-11-26T17:35:06.661Z] 30849.75 IOPS, 120.51 MiB/s [2024-11-26T17:35:06.661Z] 30911.20 IOPS, 120.75 MiB/s 00:38:29.215 Latency(us) 00:38:29.215 [2024-11-26T17:35:06.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.215 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:38:29.215 xnvme_bdev : 5.00 30886.76 120.65 0.00 0.00 2064.69 53.21 17628.90 00:38:29.215 [2024-11-26T17:35:06.661Z] =================================================================================================================== 00:38:29.215 [2024-11-26T17:35:06.661Z] Total : 30886.76 120.65 0.00 0.00 2064.69 53.21 17628.90 00:38:30.591 00:38:30.591 real 0m28.957s 00:38:30.591 user 0m16.783s 00:38:30.591 sys 0m11.730s 00:38:30.591 17:35:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.591 17:35:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:30.591 ************************************ 00:38:30.591 END TEST xnvme_bdevperf 00:38:30.591 ************************************ 00:38:30.591 17:35:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:38:30.591 17:35:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:30.591 17:35:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.591 17:35:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:30.591 ************************************ 00:38:30.591 START TEST xnvme_fio_plugin 00:38:30.591 ************************************ 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
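(Editor's note: the shell trace continuing below is the interesting part of the fio plugin setup: ldd inspects the spdk_bdev ioengine for the ASan runtime it was linked against, and both are then handed to LD_PRELOAD so the sanitizer library is resolved ahead of fio's own symbols. Flattened into a standalone sketch, with paths taken from this trace; the harness actually feeds the JSON over /dev/fd/62 rather than a file:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # /usr/lib64/libasan.so.8 on this host
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=./xnvme_bdev.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev
)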
00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:30.591 17:35:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:30.591 { 00:38:30.591 "subsystems": [ 00:38:30.591 { 00:38:30.591 "subsystem": "bdev", 00:38:30.591 "config": [ 00:38:30.591 { 00:38:30.591 "params": { 00:38:30.591 "io_mechanism": "io_uring_cmd", 00:38:30.591 "conserve_cpu": false, 00:38:30.591 "filename": "/dev/ng0n1", 00:38:30.591 "name": "xnvme_bdev" 00:38:30.591 }, 00:38:30.591 "method": "bdev_xnvme_create" 00:38:30.591 }, 00:38:30.591 { 00:38:30.591 "method": "bdev_wait_for_examine" 00:38:30.591 } 00:38:30.591 ] 00:38:30.591 } 00:38:30.591 ] 00:38:30.591 } 00:38:30.591 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:30.591 fio-3.35 00:38:30.591 Starting 1 thread 00:38:37.165 00:38:37.165 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73355: Tue Nov 26 17:35:13 2024 00:38:37.165 read: IOPS=37.1k, BW=145MiB/s (152MB/s)(726MiB/5002msec) 00:38:37.165 slat (nsec): min=2651, max=99135, avg=5750.64, stdev=2882.25 00:38:37.165 clat (usec): min=690, max=3258, avg=1496.39, stdev=441.74 00:38:37.165 lat (usec): min=694, max=3271, avg=1502.14, stdev=443.74 00:38:37.165 clat percentiles (usec): 00:38:37.165 | 1.00th=[ 857], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1057], 00:38:37.165 | 30.00th=[ 1139], 40.00th=[ 1270], 50.00th=[ 1434], 60.00th=[ 1614], 00:38:37.165 | 70.00th=[ 1762], 80.00th=[ 1926], 90.00th=[ 2114], 95.00th=[ 2278], 00:38:37.165 | 99.00th=[ 2540], 99.50th=[ 2606], 99.90th=[ 2737], 99.95th=[ 2835], 00:38:37.165 | 99.99th=[ 3032] 00:38:37.165 bw ( KiB/s): min=117248, max=190464, per=99.46%, avg=147769.89, stdev=24237.24, samples=9 00:38:37.165 iops : min=29312, max=47616, avg=36942.67, stdev=6059.57, samples=9 00:38:37.165 lat (usec) : 750=0.08%, 1000=12.58% 00:38:37.165 lat (msec) : 2=71.72%, 4=15.61% 00:38:37.165 cpu : usr=40.97%, sys=57.97%, ctx=10, majf=0, minf=762 00:38:37.165 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:38:37.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.165 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:38:37.165 issued rwts: total=185792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.165 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:37.165 00:38:37.165 Run status group 0 (all jobs): 00:38:37.165 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=726MiB (761MB), run=5002-5002msec 00:38:38.101 ----------------------------------------------------- 00:38:38.101 Suppressions used: 00:38:38.101 count bytes template 00:38:38.101 1 11 /usr/src/fio/parse.c 00:38:38.101 1 8 libtcmalloc_minimal.so 00:38:38.101 1 904 libcrypto.so 00:38:38.101 ----------------------------------------------------- 00:38:38.101 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:38.101 17:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:38.101 { 00:38:38.101 "subsystems": [ 00:38:38.101 { 00:38:38.101 "subsystem": "bdev", 00:38:38.101 "config": [ 00:38:38.101 { 00:38:38.101 "params": { 00:38:38.101 "io_mechanism": "io_uring_cmd", 00:38:38.101 "conserve_cpu": false, 00:38:38.101 "filename": "/dev/ng0n1", 00:38:38.101 "name": "xnvme_bdev" 00:38:38.101 }, 00:38:38.101 "method": "bdev_xnvme_create" 00:38:38.101 }, 00:38:38.101 { 00:38:38.101 "method": "bdev_wait_for_examine" 00:38:38.101 } 00:38:38.101 ] 00:38:38.101 } 00:38:38.101 ] 00:38:38.101 } 00:38:38.361 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:38.361 fio-3.35 00:38:38.361 Starting 1 thread 00:38:44.925 00:38:44.925 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73452: Tue Nov 26 17:35:21 2024 00:38:44.925 write: IOPS=37.4k, BW=146MiB/s (153MB/s)(731MiB/5003msec); 0 zone resets 00:38:44.925 slat (usec): min=2, max=295, avg= 5.74, stdev= 3.43 00:38:44.925 clat (usec): min=365, max=4891, avg=1481.83, stdev=449.95 00:38:44.925 lat (usec): min=368, max=4895, avg=1487.57, stdev=452.14 00:38:44.925 clat percentiles (usec): 00:38:44.925 | 1.00th=[ 848], 5.00th=[ 938], 10.00th=[ 996], 20.00th=[ 1074], 00:38:44.925 | 30.00th=[ 1156], 40.00th=[ 1237], 50.00th=[ 1369], 60.00th=[ 1532], 00:38:44.925 | 70.00th=[ 1713], 80.00th=[ 1909], 90.00th=[ 2147], 95.00th=[ 2311], 00:38:44.925 | 99.00th=[ 2638], 99.50th=[ 2737], 99.90th=[ 3130], 99.95th=[ 3654], 00:38:44.925 | 99.99th=[ 4015] 00:38:44.925 bw ( KiB/s): min=131072, max=173568, per=100.00%, avg=155131.56, stdev=16568.01, samples=9 00:38:44.925 iops : min=32768, max=43392, avg=38782.89, stdev=4142.00, samples=9 00:38:44.925 lat (usec) : 500=0.02%, 750=0.11%, 1000=10.41% 00:38:44.925 lat (msec) : 2=73.96%, 4=15.49%, 10=0.01% 00:38:44.925 cpu : usr=43.92%, sys=54.80%, ctx=9, majf=0, minf=762 00:38:44.925 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:38:44.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.925 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:38:44.925 issued rwts: total=0,187163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.925 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:44.925 00:38:44.925 Run status group 0 (all jobs): 00:38:44.925 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=731MiB (767MB), run=5003-5003msec 00:38:45.862 ----------------------------------------------------- 00:38:45.862 Suppressions used: 00:38:45.862 count bytes template 00:38:45.862 1 11 /usr/src/fio/parse.c 00:38:45.862 1 8 libtcmalloc_minimal.so 00:38:45.862 1 904 libcrypto.so 00:38:45.862 ----------------------------------------------------- 00:38:45.862 00:38:45.862 00:38:45.862 real 0m15.386s 00:38:45.862 user 0m8.596s 00:38:45.862 sys 0m6.422s 00:38:45.862 17:35:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.862 17:35:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:45.862 ************************************ 00:38:45.862 END TEST xnvme_fio_plugin 00:38:45.862 ************************************ 00:38:45.862 17:35:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:38:45.862 17:35:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:38:45.862 17:35:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:38:45.862 17:35:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:38:45.862 17:35:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:45.862 17:35:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.862 17:35:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:45.862 ************************************ 00:38:45.862 START TEST xnvme_rpc 00:38:45.862 ************************************ 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73537 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73537 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73537 ']' 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.862 17:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:46.121 [2024-11-26 17:35:23.335541] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
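(Editor's note: spdk_tgt is coming up here so the test can exercise the xnvme bdev purely over RPC: create it with conserve_cpu, read every attribute back out of the live config, then delete it. Condensed to plain rpc.py calls; rpc_cmd in the trace is a wrapper around that script, and the jq filter is the one used below:

    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c  # -c = conserve_cpu
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect: true
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
)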
00:38:46.121 [2024-11-26 17:35:23.335722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73537 ] 00:38:46.121 [2024-11-26 17:35:23.510461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.379 [2024-11-26 17:35:23.670874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:47.798 xnvme_bdev 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:47.798 17:35:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73537 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73537 ']' 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73537 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73537 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:47.798 killing process with pid 73537 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73537' 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73537 00:38:47.798 17:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73537 00:38:51.086 00:38:51.086 real 0m4.783s 00:38:51.086 user 0m4.736s 00:38:51.086 sys 0m0.722s 00:38:51.086 17:35:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.086 17:35:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:51.086 ************************************ 00:38:51.086 END TEST xnvme_rpc 00:38:51.086 ************************************ 00:38:51.086 17:35:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:38:51.086 17:35:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:51.086 17:35:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.086 17:35:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:51.086 ************************************ 00:38:51.086 START TEST xnvme_bdevperf 00:38:51.086 ************************************ 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:51.086 17:35:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.086 { 00:38:51.086 "subsystems": [ 00:38:51.086 { 00:38:51.086 "subsystem": "bdev", 00:38:51.086 "config": [ 00:38:51.086 { 00:38:51.086 "params": { 00:38:51.086 "io_mechanism": "io_uring_cmd", 00:38:51.086 "conserve_cpu": true, 00:38:51.086 "filename": "/dev/ng0n1", 00:38:51.086 "name": "xnvme_bdev" 00:38:51.086 }, 00:38:51.086 "method": "bdev_xnvme_create" 00:38:51.086 }, 00:38:51.086 { 00:38:51.086 "method": "bdev_wait_for_examine" 00:38:51.086 } 00:38:51.086 ] 00:38:51.086 } 00:38:51.086 ] 00:38:51.086 } 00:38:51.086 [2024-11-26 17:35:28.129407] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:38:51.086 [2024-11-26 17:35:28.129554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73628 ] 00:38:51.086 [2024-11-26 17:35:28.301883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.086 [2024-11-26 17:35:28.455709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.718 Running I/O for 5 seconds... 00:38:53.644 31936.00 IOPS, 124.75 MiB/s [2024-11-26T17:35:32.028Z] 29440.00 IOPS, 115.00 MiB/s [2024-11-26T17:35:32.965Z] 28544.00 IOPS, 111.50 MiB/s [2024-11-26T17:35:33.905Z] 29424.00 IOPS, 114.94 MiB/s 00:38:56.459 Latency(us) 00:38:56.459 [2024-11-26T17:35:33.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.459 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:38:56.459 xnvme_bdev : 5.00 29036.40 113.42 0.00 0.00 2195.84 790.58 9615.76 00:38:56.459 [2024-11-26T17:35:33.905Z] =================================================================================================================== 00:38:56.459 [2024-11-26T17:35:33.905Z] Total : 29036.40 113.42 0.00 0.00 2195.84 790.58 9615.76 00:38:57.836 17:35:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:57.836 17:35:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:57.836 17:35:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:38:57.836 17:35:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:57.836 17:35:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:57.836 { 00:38:57.836 "subsystems": [ 00:38:57.836 { 00:38:57.836 "subsystem": "bdev", 00:38:57.836 "config": [ 00:38:57.836 { 00:38:57.836 "params": { 00:38:57.836 "io_mechanism": "io_uring_cmd", 00:38:57.836 "conserve_cpu": true, 00:38:57.836 "filename": "/dev/ng0n1", 00:38:57.836 "name": "xnvme_bdev" 00:38:57.836 }, 00:38:57.836 "method": "bdev_xnvme_create" 00:38:57.836 }, 00:38:57.836 { 00:38:57.836 "method": "bdev_wait_for_examine" 00:38:57.836 } 00:38:57.836 ] 00:38:57.836 } 00:38:57.836 ] 00:38:57.836 } 00:38:57.836 [2024-11-26 17:35:35.271629] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
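(Editor's note: a quick way to sanity-check the bandwidth columns in these Latency tables: MiB/s is just IOPS times the 4096-byte IO size. For the randread total above, a throwaway bc one-liner, not part of the harness:

    echo 'scale=2; 29036.40 * 4096 / 1048576' | bc  # -> 113.42, matching the MiB/s column
)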
00:38:57.836 [2024-11-26 17:35:35.271775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73712 ] 00:38:58.095 [2024-11-26 17:35:35.445481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.354 [2024-11-26 17:35:35.588128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.613 Running I/O for 5 seconds... 00:39:00.929 25133.00 IOPS, 98.18 MiB/s [2024-11-26T17:35:39.312Z] 21051.50 IOPS, 82.23 MiB/s [2024-11-26T17:35:40.247Z] 23549.00 IOPS, 91.99 MiB/s [2024-11-26T17:35:41.184Z] 24957.75 IOPS, 97.49 MiB/s [2024-11-26T17:35:41.184Z] 26455.80 IOPS, 103.34 MiB/s 00:39:03.738 Latency(us) 00:39:03.738 [2024-11-26T17:35:41.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.738 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:39:03.738 xnvme_bdev : 5.01 26403.13 103.14 0.00 0.00 2416.34 46.95 19346.00 00:39:03.738 [2024-11-26T17:35:41.184Z] =================================================================================================================== 00:39:03.738 [2024-11-26T17:35:41.184Z] Total : 26403.13 103.14 0.00 0.00 2416.34 46.95 19346.00 00:39:05.114 17:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:05.114 17:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:39:05.114 17:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:05.114 17:35:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:05.114 17:35:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:05.114 { 00:39:05.114 "subsystems": [ 00:39:05.114 { 00:39:05.114 "subsystem": "bdev", 00:39:05.114 "config": [ 00:39:05.114 { 00:39:05.114 "params": { 00:39:05.114 "io_mechanism": "io_uring_cmd", 00:39:05.114 "conserve_cpu": true, 00:39:05.114 "filename": "/dev/ng0n1", 00:39:05.114 "name": "xnvme_bdev" 00:39:05.114 }, 00:39:05.114 "method": "bdev_xnvme_create" 00:39:05.114 }, 00:39:05.114 { 00:39:05.114 "method": "bdev_wait_for_examine" 00:39:05.114 } 00:39:05.114 ] 00:39:05.114 } 00:39:05.114 ] 00:39:05.114 } 00:39:05.114 [2024-11-26 17:35:42.394884] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:05.114 [2024-11-26 17:35:42.395030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73793 ] 00:39:05.373 [2024-11-26 17:35:42.572947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.373 [2024-11-26 17:35:42.715024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.940 Running I/O for 5 seconds... 
00:39:07.811 76160.00 IOPS, 297.50 MiB/s [2024-11-26T17:35:46.202Z] 76128.00 IOPS, 297.38 MiB/s [2024-11-26T17:35:47.576Z] 75562.67 IOPS, 295.17 MiB/s [2024-11-26T17:35:48.514Z] 75232.00 IOPS, 293.88 MiB/s 00:39:11.068 Latency(us) 00:39:11.068 [2024-11-26T17:35:48.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.068 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:39:11.068 xnvme_bdev : 5.00 75299.45 294.14 0.00 0.00 847.05 568.79 2346.70 00:39:11.068 [2024-11-26T17:35:48.515Z] =================================================================================================================== 00:39:11.069 [2024-11-26T17:35:48.515Z] Total : 75299.45 294.14 0.00 0.00 847.05 568.79 2346.70 00:39:12.042 17:35:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:12.042 17:35:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:39:12.042 17:35:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:12.042 17:35:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:12.042 17:35:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:12.042 { 00:39:12.042 "subsystems": [ 00:39:12.042 { 00:39:12.042 "subsystem": "bdev", 00:39:12.042 "config": [ 00:39:12.042 { 00:39:12.042 "params": { 00:39:12.042 "io_mechanism": "io_uring_cmd", 00:39:12.042 "conserve_cpu": true, 00:39:12.042 "filename": "/dev/ng0n1", 00:39:12.042 "name": "xnvme_bdev" 00:39:12.042 }, 00:39:12.042 "method": "bdev_xnvme_create" 00:39:12.042 }, 00:39:12.042 { 00:39:12.042 "method": "bdev_wait_for_examine" 00:39:12.042 } 00:39:12.042 ] 00:39:12.042 } 00:39:12.042 ] 00:39:12.042 } 00:39:12.301 [2024-11-26 17:35:49.510840] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:12.301 [2024-11-26 17:35:49.510972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73874 ] 00:39:12.301 [2024-11-26 17:35:49.689600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.560 [2024-11-26 17:35:49.837331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.820 Running I/O for 5 seconds... 
00:39:15.128 62667.00 IOPS, 244.79 MiB/s [2024-11-26T17:35:53.507Z] 52911.00 IOPS, 206.68 MiB/s [2024-11-26T17:35:54.442Z] 49695.67 IOPS, 194.12 MiB/s [2024-11-26T17:35:55.378Z] 47267.50 IOPS, 184.64 MiB/s [2024-11-26T17:35:55.378Z] 44579.20 IOPS, 174.14 MiB/s 00:39:17.932 Latency(us) 00:39:17.932 [2024-11-26T17:35:55.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.932 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:39:17.932 xnvme_bdev : 5.00 44545.54 174.01 0.00 0.00 1429.61 63.50 13736.80 00:39:17.932 [2024-11-26T17:35:55.378Z] =================================================================================================================== 00:39:17.932 [2024-11-26T17:35:55.378Z] Total : 44545.54 174.01 0.00 0.00 1429.61 63.50 13736.80 00:39:19.313 00:39:19.313 real 0m28.545s 00:39:19.313 user 0m19.288s 00:39:19.313 sys 0m7.457s 00:39:19.313 17:35:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.313 17:35:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:19.313 ************************************ 00:39:19.313 END TEST xnvme_bdevperf 00:39:19.313 ************************************ 00:39:19.313 17:35:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:39:19.313 17:35:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:19.313 17:35:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:19.313 17:35:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:19.313 ************************************ 00:39:19.313 START TEST xnvme_fio_plugin 00:39:19.313 ************************************ 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
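(Editor's note: the fio invocation being assembled below reuses the flag set from the earlier plugin runs. For reference, the same run expressed as a jobfile instead of CLI flags would look roughly like this; an equivalent sketch, with the generated JSON assumed to sit in a local file:

    [xnvme_bdev]
    ioengine=spdk_bdev
    spdk_json_conf=./xnvme_bdev.json
    filename=xnvme_bdev
    thread=1
    direct=1
    bs=4k
    iodepth=64
    numjobs=1
    rw=randread
    time_based=1
    runtime=5
)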
00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:19.313 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:19.313 { 00:39:19.313 "subsystems": [ 00:39:19.313 { 00:39:19.313 "subsystem": "bdev", 00:39:19.313 "config": [ 00:39:19.313 { 00:39:19.313 "params": { 00:39:19.313 "io_mechanism": "io_uring_cmd", 00:39:19.313 "conserve_cpu": true, 00:39:19.313 "filename": "/dev/ng0n1", 00:39:19.314 "name": "xnvme_bdev" 00:39:19.314 }, 00:39:19.314 "method": "bdev_xnvme_create" 00:39:19.314 }, 00:39:19.314 { 00:39:19.314 "method": "bdev_wait_for_examine" 00:39:19.314 } 00:39:19.314 ] 00:39:19.314 } 00:39:19.314 ] 00:39:19.314 } 00:39:19.314 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:19.314 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:19.314 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:19.314 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:19.314 17:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:19.573 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:19.573 fio-3.35 00:39:19.573 Starting 1 thread 00:39:26.143 00:39:26.143 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73998: Tue Nov 26 17:36:02 2024 00:39:26.143 read: IOPS=27.1k, BW=106MiB/s (111MB/s)(530MiB/5002msec) 00:39:26.143 slat (usec): min=3, max=125, avg= 8.47, stdev= 3.17 00:39:26.143 clat (usec): min=1059, max=5365, avg=2028.80, stdev=339.06 00:39:26.143 lat (usec): min=1063, max=5376, avg=2037.27, stdev=340.81 00:39:26.143 clat percentiles (usec): 00:39:26.143 | 1.00th=[ 1254], 5.00th=[ 1401], 10.00th=[ 1549], 20.00th=[ 1745], 00:39:26.143 | 30.00th=[ 1876], 40.00th=[ 1958], 50.00th=[ 2040], 60.00th=[ 2114], 00:39:26.143 | 70.00th=[ 2212], 80.00th=[ 2343], 90.00th=[ 2474], 95.00th=[ 2573], 00:39:26.143 | 99.00th=[ 2704], 99.50th=[ 2737], 99.90th=[ 2900], 99.95th=[ 2933], 00:39:26.143 | 99.99th=[ 3064] 00:39:26.143 bw ( KiB/s): min=97596, max=125952, per=100.00%, avg=108692.89, stdev=11011.67, samples=9 00:39:26.143 iops : min=24399, max=31488, avg=27173.22, stdev=2752.92, samples=9 00:39:26.143 lat (msec) : 2=45.01%, 4=54.99%, 10=0.01% 00:39:26.143 cpu : usr=48.91%, sys=48.23%, ctx=12, majf=0, minf=762 00:39:26.143 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:39:26.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.144 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:39:26.144 issued 
rwts: total=135620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.144 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:26.144 00:39:26.144 Run status group 0 (all jobs): 00:39:26.144 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=530MiB (555MB), run=5002-5002msec 00:39:27.077 ----------------------------------------------------- 00:39:27.077 Suppressions used: 00:39:27.077 count bytes template 00:39:27.077 1 11 /usr/src/fio/parse.c 00:39:27.077 1 8 libtcmalloc_minimal.so 00:39:27.078 1 904 libcrypto.so 00:39:27.078 ----------------------------------------------------- 00:39:27.078 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:27.078 17:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:39:27.078 { 00:39:27.078 "subsystems": [ 00:39:27.078 { 00:39:27.078 "subsystem": "bdev", 00:39:27.078 "config": [ 00:39:27.078 { 00:39:27.078 "params": { 00:39:27.078 "io_mechanism": "io_uring_cmd", 00:39:27.078 "conserve_cpu": true, 00:39:27.078 "filename": "/dev/ng0n1", 00:39:27.078 "name": "xnvme_bdev" 00:39:27.078 }, 00:39:27.078 "method": "bdev_xnvme_create" 00:39:27.078 }, 00:39:27.078 { 00:39:27.078 "method": "bdev_wait_for_examine" 00:39:27.078 } 00:39:27.078 ] 00:39:27.078 } 00:39:27.078 ] 00:39:27.078 } 00:39:27.337 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:27.337 fio-3.35 00:39:27.337 Starting 1 thread 00:39:33.909 00:39:33.909 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74090: Tue Nov 26 17:36:10 2024 00:39:33.909 write: IOPS=25.6k, BW=100MiB/s (105MB/s)(501MiB/5008msec); 0 zone resets 00:39:33.909 slat (usec): min=2, max=270, avg= 7.03, stdev= 4.22 00:39:33.909 clat (usec): min=73, max=22151, avg=2237.83, stdev=2285.54 00:39:33.909 lat (usec): min=77, max=22155, avg=2244.86, stdev=2285.51 00:39:33.909 clat percentiles (usec): 00:39:33.909 | 1.00th=[ 200], 5.00th=[ 873], 10.00th=[ 988], 20.00th=[ 1123], 00:39:33.909 | 30.00th=[ 1336], 40.00th=[ 1696], 50.00th=[ 1876], 60.00th=[ 2008], 00:39:33.910 | 70.00th=[ 2147], 80.00th=[ 2311], 90.00th=[ 2573], 95.00th=[ 6915], 00:39:33.910 | 99.00th=[13829], 99.50th=[15401], 99.90th=[18220], 99.95th=[19268], 00:39:33.910 | 99.99th=[20579] 00:39:33.910 bw ( KiB/s): min=40936, max=156160, per=100.00%, avg=102629.60, stdev=38152.46, samples=10 00:39:33.910 iops : min=10234, max=39040, avg=25657.40, stdev=9538.12, samples=10 00:39:33.910 lat (usec) : 100=0.03%, 250=1.54%, 500=1.37%, 750=0.61%, 1000=7.09% 00:39:33.910 lat (msec) : 2=48.58%, 4=34.45%, 10=3.37%, 20=2.95%, 50=0.02% 00:39:33.910 cpu : usr=58.08%, sys=37.21%, ctx=24, majf=0, minf=762 00:39:33.910 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.2%, 16=22.6%, 32=52.7%, >=64=3.7% 00:39:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.910 complete : 0=0.0%, 4=98.1%, 8=0.3%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0% 00:39:33.910 issued rwts: total=0,128349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.910 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:33.910 00:39:33.910 Run status group 0 (all jobs): 00:39:33.910 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=501MiB (526MB), run=5008-5008msec 00:39:34.477 ----------------------------------------------------- 00:39:34.477 Suppressions used: 00:39:34.477 count bytes template 00:39:34.477 1 11 /usr/src/fio/parse.c 00:39:34.477 1 8 libtcmalloc_minimal.so 00:39:34.477 1 904 libcrypto.so 00:39:34.477 ----------------------------------------------------- 00:39:34.477 00:39:34.735 00:39:34.735 real 0m15.293s 00:39:34.735 user 0m9.517s 00:39:34.735 sys 0m5.134s 00:39:34.735 17:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.735 17:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:34.735 ************************************ 00:39:34.735 END TEST xnvme_fio_plugin 00:39:34.735 ************************************ 00:39:34.735 17:36:11 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73537 00:39:34.735 17:36:11 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73537 ']' 00:39:34.735 17:36:11 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73537 00:39:34.735 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73537) - No such process 00:39:34.735 Process with pid 73537 is not found 00:39:34.735 17:36:11 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73537 is not found' 00:39:34.735 00:39:34.735 real 4m4.573s 00:39:34.735 user 2m22.660s 00:39:34.735 sys 1m26.534s 00:39:34.736 17:36:11 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.736 17:36:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 ************************************ 00:39:34.736 END TEST nvme_xnvme 00:39:34.736 ************************************ 00:39:34.736 17:36:12 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:39:34.736 17:36:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:34.736 17:36:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.736 17:36:12 -- common/autotest_common.sh@10 -- # set +x 00:39:34.736 ************************************ 00:39:34.736 START TEST blockdev_xnvme 00:39:34.736 ************************************ 00:39:34.736 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:39:34.736 * Looking for test storage... 00:39:34.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:34.736 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:34.736 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:39:34.736 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:34.994 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.994 17:36:12 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:39:34.994 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.994 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.994 --rc genhtml_branch_coverage=1 00:39:34.994 --rc genhtml_function_coverage=1 00:39:34.994 --rc genhtml_legend=1 00:39:34.994 --rc geninfo_all_blocks=1 00:39:34.994 --rc geninfo_unexecuted_blocks=1 00:39:34.994 00:39:34.994 ' 00:39:34.994 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.994 --rc genhtml_branch_coverage=1 00:39:34.994 --rc genhtml_function_coverage=1 00:39:34.994 --rc genhtml_legend=1 00:39:34.994 --rc geninfo_all_blocks=1 00:39:34.994 --rc geninfo_unexecuted_blocks=1 00:39:34.994 00:39:34.994 ' 00:39:34.994 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.994 --rc genhtml_branch_coverage=1 00:39:34.994 --rc genhtml_function_coverage=1 00:39:34.994 --rc genhtml_legend=1 00:39:34.994 --rc geninfo_all_blocks=1 00:39:34.994 --rc geninfo_unexecuted_blocks=1 00:39:34.994 00:39:34.994 ' 00:39:34.994 17:36:12 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.994 --rc genhtml_branch_coverage=1 00:39:34.994 --rc genhtml_function_coverage=1 00:39:34.994 --rc genhtml_legend=1 00:39:34.995 --rc geninfo_all_blocks=1 00:39:34.995 --rc geninfo_unexecuted_blocks=1 00:39:34.995 00:39:34.995 ' 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74230 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74230 00:39:34.995 17:36:12 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74230 ']' 00:39:34.995 17:36:12 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.995 17:36:12 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.995 17:36:12 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.995 17:36:12 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:34.995 17:36:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:34.995 17:36:12 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:34.995 [2024-11-26 17:36:12.344600] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
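(Editor's note: the scripts/common.sh trace a little further up — 'lt 1.15 2' and friends — is lcov version gating: version strings are split on dots and compared field by field, and an lcov older than 2 keeps the legacy --rc lcov_*_coverage=1 flag spellings. A condensed, hypothetical re-implementation of that comparison, not the verbatim helper:

    lt() {  # true when version $1 sorts strictly before $2
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    lt 1.15 2 && echo 'old lcov: use --rc lcov_branch_coverage=1'
)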
00:39:34.995 [2024-11-26 17:36:12.344744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74230 ] 00:39:35.253 [2024-11-26 17:36:12.528814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.253 [2024-11-26 17:36:12.672486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.627 17:36:13 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:36.627 17:36:13 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:39:36.627 17:36:13 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:39:36.627 17:36:13 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:39:36.627 17:36:13 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:39:36.627 17:36:13 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:39:36.627 17:36:13 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:36.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:37.450 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:39:37.450 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:39:37.450 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:39:37.709 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:39:37.709 nvme0n1 00:39:37.709 nvme0n2 00:39:37.709 nvme0n3 00:39:37.709 nvme1n1 00:39:37.709 nvme2n1 00:39:37.709 nvme3n1 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:37.709 17:36:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.709 17:36:14 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.709 17:36:15 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:39:37.709 17:36:15 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.709 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:39:37.710 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "215e04e3-49ca-402c-89e7-38b1ab7509b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "215e04e3-49ca-402c-89e7-38b1ab7509b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dd169eb3-fbaa-44a0-934f-dfdf42cee47c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dd169eb3-fbaa-44a0-934f-dfdf42cee47c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5994a548-a117-409d-b292-589ceffe8f2d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5994a548-a117-409d-b292-589ceffe8f2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f8a4b2e4-8c0f-4ad3-8d92-39e3def1ce25"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f8a4b2e4-8c0f-4ad3-8d92-39e3def1ce25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d7a0740c-bd5f-4ab8-ba2f-2fa380a3bcfa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d7a0740c-bd5f-4ab8-ba2f-2fa380a3bcfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d2933865-e4ca-43b1-8eb4-662115daf357"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d2933865-e4ca-43b1-8eb4-662115daf357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:39:37.710 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:39:37.969 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:39:37.969 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:39:37.969 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:39:37.969 17:36:15 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74230 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74230 ']' 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74230 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74230 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:37.969 killing process with pid 74230 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74230' 00:39:37.969 17:36:15 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74230 00:39:37.969 
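The bdev_xnvme_create lines printed above are the RPC commands that setup_xnvme_conf assembled: one xNVMe bdev per non-zoned NVMe namespace, all with the io_uring io_mechanism and the -c flag carried over verbatim from the trace. A hand-driven sketch of the same setup, assuming spdk_tgt is already listening on the default socket; the zoned check mirrors what get_zoned_devs reads from sysfs:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        # get_zoned_devs reads this sysfs attribute; zoned namespaces are skipped.
        zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned == none ]] || continue
        "$rpc" bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
    done
    "$rpc" bdev_wait_for_examine
    # List the resulting unclaimed bdevs, as the mapfile/jq pipeline above does.
    "$rpc" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'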
17:36:15 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74230 00:39:40.506 17:36:17 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:40.506 17:36:17 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:39:40.506 17:36:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:40.506 17:36:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:40.506 17:36:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:40.506 ************************************ 00:39:40.506 START TEST bdev_hello_world 00:39:40.506 ************************************ 00:39:40.506 17:36:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:39:40.506 [2024-11-26 17:36:17.949392] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:40.506 [2024-11-26 17:36:17.949534] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74525 ] 00:39:40.764 [2024-11-26 17:36:18.131677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.023 [2024-11-26 17:36:18.276061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.591 [2024-11-26 17:36:18.786870] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:41.591 [2024-11-26 17:36:18.786927] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:39:41.591 [2024-11-26 17:36:18.786944] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:41.591 [2024-11-26 17:36:18.789104] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:41.592 [2024-11-26 17:36:18.789491] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:41.592 [2024-11-26 17:36:18.789515] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:41.592 [2024-11-26 17:36:18.789809] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
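The hello-world step just traced is the stock hello_bdev example pointed at the first unclaimed bdev. Replaying it by hand is a single command, assuming the same repo layout and enough privilege to open the backing devices:

    cd /home/vagrant/spdk_repo/spdk
    # Writes "Hello World!" to nvme0n1 via the bdev layer and reads it back,
    # as the NOTICE lines above confirm.
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1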
00:39:41.592 00:39:41.592 [2024-11-26 17:36:18.789838] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:42.967 00:39:42.967 real 0m2.174s 00:39:42.967 user 0m1.762s 00:39:42.967 sys 0m0.295s 00:39:42.967 17:36:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:42.967 17:36:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:42.967 ************************************ 00:39:42.967 END TEST bdev_hello_world 00:39:42.967 ************************************ 00:39:42.967 17:36:20 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:39:42.967 17:36:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:42.967 17:36:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:42.967 17:36:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:42.968 ************************************ 00:39:42.968 START TEST bdev_bounds 00:39:42.968 ************************************ 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74572 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:42.968 Process bdevio pid: 74572 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74572' 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74572 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74572 ']' 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:42.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:42.968 17:36:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:42.968 [2024-11-26 17:36:20.206095] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
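bdev_bounds drives the bdevio CUnit harness in two halves: bdevio starts in wait mode (-w) against the same bdev.json, and tests.py perform_tests triggers the full suite over the RPC socket once the app is up. A sketch of that pairing, assuming the repo layout from this run (the socket wait is elided):

    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/test/bdev/bdevio/bdevio" -w -s 0 --json "$rootdir/test/bdev/bdev.json" &
    bdevio_pid=$!
    # ... wait for /var/tmp/spdk.sock as in waitforlisten, then:
    "$rootdir/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"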
00:39:42.968 [2024-11-26 17:36:20.206244] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74572 ] 00:39:42.968 [2024-11-26 17:36:20.389990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:43.226 [2024-11-26 17:36:20.539657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.226 [2024-11-26 17:36:20.539776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.226 [2024-11-26 17:36:20.539814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:43.795 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:43.795 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:39:43.795 17:36:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:43.795 I/O targets: 00:39:43.795 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:39:43.795 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:39:43.795 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:39:43.795 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:39:43.795 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:39:43.795 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:39:43.795 00:39:43.795 00:39:43.795 CUnit - A unit testing framework for C - Version 2.1-3 00:39:43.795 http://cunit.sourceforge.net/ 00:39:43.795 00:39:43.795 00:39:43.795 Suite: bdevio tests on: nvme3n1 00:39:43.795 Test: blockdev write read block ...passed 00:39:43.795 Test: blockdev write zeroes read block ...passed 00:39:43.795 Test: blockdev write zeroes read no split ...passed 00:39:43.795 Test: blockdev write zeroes read split ...passed 00:39:44.054 Test: blockdev write zeroes read split partial ...passed 00:39:44.054 Test: blockdev reset ...passed 00:39:44.054 Test: blockdev write read 8 blocks ...passed 00:39:44.054 Test: blockdev write read size > 128k ...passed 00:39:44.054 Test: blockdev write read invalid size ...passed 00:39:44.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.054 Test: blockdev write read max offset ...passed 00:39:44.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.055 Test: blockdev writev readv 8 blocks ...passed 00:39:44.055 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.055 Test: blockdev writev readv block ...passed 00:39:44.055 Test: blockdev writev readv size > 128k ...passed 00:39:44.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.055 Test: blockdev comparev and writev ...passed 00:39:44.055 Test: blockdev nvme passthru rw ...passed 00:39:44.055 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.055 Test: blockdev nvme admin passthru ...passed 00:39:44.055 Test: blockdev copy ...passed 00:39:44.055 Suite: bdevio tests on: nvme2n1 00:39:44.055 Test: blockdev write read block ...passed 00:39:44.055 Test: blockdev write zeroes read block ...passed 00:39:44.055 Test: blockdev write zeroes read no split ...passed 00:39:44.055 Test: blockdev write zeroes read split ...passed 00:39:44.055 Test: blockdev write zeroes read split partial ...passed 00:39:44.055 Test: blockdev reset ...passed 
00:39:44.055 Test: blockdev write read 8 blocks ...passed 00:39:44.055 Test: blockdev write read size > 128k ...passed 00:39:44.055 Test: blockdev write read invalid size ...passed 00:39:44.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.055 Test: blockdev write read max offset ...passed 00:39:44.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.055 Test: blockdev writev readv 8 blocks ...passed 00:39:44.055 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.055 Test: blockdev writev readv block ...passed 00:39:44.055 Test: blockdev writev readv size > 128k ...passed 00:39:44.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.055 Test: blockdev comparev and writev ...passed 00:39:44.055 Test: blockdev nvme passthru rw ...passed 00:39:44.055 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.055 Test: blockdev nvme admin passthru ...passed 00:39:44.055 Test: blockdev copy ...passed 00:39:44.055 Suite: bdevio tests on: nvme1n1 00:39:44.055 Test: blockdev write read block ...passed 00:39:44.055 Test: blockdev write zeroes read block ...passed 00:39:44.055 Test: blockdev write zeroes read no split ...passed 00:39:44.055 Test: blockdev write zeroes read split ...passed 00:39:44.055 Test: blockdev write zeroes read split partial ...passed 00:39:44.055 Test: blockdev reset ...passed 00:39:44.055 Test: blockdev write read 8 blocks ...passed 00:39:44.055 Test: blockdev write read size > 128k ...passed 00:39:44.055 Test: blockdev write read invalid size ...passed 00:39:44.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.055 Test: blockdev write read max offset ...passed 00:39:44.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.055 Test: blockdev writev readv 8 blocks ...passed 00:39:44.055 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.055 Test: blockdev writev readv block ...passed 00:39:44.055 Test: blockdev writev readv size > 128k ...passed 00:39:44.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.055 Test: blockdev comparev and writev ...passed 00:39:44.055 Test: blockdev nvme passthru rw ...passed 00:39:44.055 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.055 Test: blockdev nvme admin passthru ...passed 00:39:44.055 Test: blockdev copy ...passed 00:39:44.055 Suite: bdevio tests on: nvme0n3 00:39:44.055 Test: blockdev write read block ...passed 00:39:44.055 Test: blockdev write zeroes read block ...passed 00:39:44.055 Test: blockdev write zeroes read no split ...passed 00:39:44.314 Test: blockdev write zeroes read split ...passed 00:39:44.314 Test: blockdev write zeroes read split partial ...passed 00:39:44.314 Test: blockdev reset ...passed 00:39:44.314 Test: blockdev write read 8 blocks ...passed 00:39:44.314 Test: blockdev write read size > 128k ...passed 00:39:44.314 Test: blockdev write read invalid size ...passed 00:39:44.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.314 Test: blockdev write read max offset ...passed 00:39:44.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.314 Test: blockdev writev readv 8 blocks 
...passed 00:39:44.314 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.314 Test: blockdev writev readv block ...passed 00:39:44.314 Test: blockdev writev readv size > 128k ...passed 00:39:44.314 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.314 Test: blockdev comparev and writev ...passed 00:39:44.314 Test: blockdev nvme passthru rw ...passed 00:39:44.314 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.314 Test: blockdev nvme admin passthru ...passed 00:39:44.314 Test: blockdev copy ...passed 00:39:44.314 Suite: bdevio tests on: nvme0n2 00:39:44.314 Test: blockdev write read block ...passed 00:39:44.314 Test: blockdev write zeroes read block ...passed 00:39:44.314 Test: blockdev write zeroes read no split ...passed 00:39:44.314 Test: blockdev write zeroes read split ...passed 00:39:44.314 Test: blockdev write zeroes read split partial ...passed 00:39:44.314 Test: blockdev reset ...passed 00:39:44.314 Test: blockdev write read 8 blocks ...passed 00:39:44.314 Test: blockdev write read size > 128k ...passed 00:39:44.314 Test: blockdev write read invalid size ...passed 00:39:44.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.314 Test: blockdev write read max offset ...passed 00:39:44.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.314 Test: blockdev writev readv 8 blocks ...passed 00:39:44.314 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.314 Test: blockdev writev readv block ...passed 00:39:44.314 Test: blockdev writev readv size > 128k ...passed 00:39:44.314 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.314 Test: blockdev comparev and writev ...passed 00:39:44.314 Test: blockdev nvme passthru rw ...passed 00:39:44.314 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.314 Test: blockdev nvme admin passthru ...passed 00:39:44.314 Test: blockdev copy ...passed 00:39:44.314 Suite: bdevio tests on: nvme0n1 00:39:44.314 Test: blockdev write read block ...passed 00:39:44.314 Test: blockdev write zeroes read block ...passed 00:39:44.314 Test: blockdev write zeroes read no split ...passed 00:39:44.314 Test: blockdev write zeroes read split ...passed 00:39:44.314 Test: blockdev write zeroes read split partial ...passed 00:39:44.314 Test: blockdev reset ...passed 00:39:44.314 Test: blockdev write read 8 blocks ...passed 00:39:44.314 Test: blockdev write read size > 128k ...passed 00:39:44.573 Test: blockdev write read invalid size ...passed 00:39:44.573 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.573 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.573 Test: blockdev write read max offset ...passed 00:39:44.573 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.573 Test: blockdev writev readv 8 blocks ...passed 00:39:44.573 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.573 Test: blockdev writev readv block ...passed 00:39:44.573 Test: blockdev writev readv size > 128k ...passed 00:39:44.573 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.573 Test: blockdev comparev and writev ...passed 00:39:44.573 Test: blockdev nvme passthru rw ...passed 00:39:44.573 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.573 Test: blockdev nvme admin passthru ...passed 00:39:44.573 Test: blockdev copy ...passed 
00:39:44.573 00:39:44.573 Run Summary: Type Total Ran Passed Failed Inactive 00:39:44.573 suites 6 6 n/a 0 0 00:39:44.573 tests 138 138 138 0 0 00:39:44.573 asserts 780 780 780 0 n/a 00:39:44.573 00:39:44.573 Elapsed time = 1.608 seconds 00:39:44.573 0 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74572 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74572 ']' 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74572 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74572 00:39:44.573 killing process with pid 74572 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74572' 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74572 00:39:44.573 17:36:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74572 00:39:45.959 ************************************ 00:39:45.959 END TEST bdev_bounds 00:39:45.959 ************************************ 00:39:45.959 17:36:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:39:45.959 00:39:45.959 real 0m3.005s 00:39:45.959 user 0m7.418s 00:39:45.959 sys 0m0.504s 00:39:45.959 17:36:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:45.959 17:36:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:45.959 17:36:23 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:39:45.959 17:36:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:45.959 17:36:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:45.959 17:36:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:45.959 ************************************ 00:39:45.959 START TEST bdev_nbd 00:39:45.959 ************************************ 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74643 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74643 /var/tmp/spdk-nbd.sock 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74643 ']' 00:39:45.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:45.959 17:36:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:45.959 [2024-11-26 17:36:23.330590] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:39:45.959 [2024-11-26 17:36:23.331507] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:46.218 [2024-11-26 17:36:23.517998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.218 [2024-11-26 17:36:23.659005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:46.787 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.047 
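Each nbd_start_disk/dd pair in the stretch that follows is the same round-trip: export one bdev through the spdk-nbd RPC socket (the RPC prints the /dev/nbdX the kernel allocated), wait for it to appear in /proc/partitions, then read a single 4 KiB block with O_DIRECT. A sketch for one bdev, assuming the socket path from this run; the retry loop stands in for the waitfornbd helper:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd=$($rpc nbd_start_disk nvme0n1)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "${nbd##*/}" /proc/partitions && break
        sleep 0.1
    done
    # One 4 KiB direct-I/O read proves the export works, as the dd traces show.
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct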
1+0 records in 00:39:47.047 1+0 records out 00:39:47.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659194 s, 6.2 MB/s 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:47.047 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:39:47.306 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:39:47.306 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.307 1+0 records in 00:39:47.307 1+0 records out 00:39:47.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546382 s, 7.5 MB/s 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:47.307 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:39:47.566 17:36:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.566 1+0 records in 00:39:47.566 1+0 records out 00:39:47.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636845 s, 6.4 MB/s 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.566 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:47.567 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.567 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:47.567 17:36:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:47.567 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:47.567 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:47.567 17:36:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.826 1+0 records in 00:39:47.826 1+0 records out 00:39:47.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735621 s, 5.6 MB/s 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:47.826 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:48.086 1+0 records in 00:39:48.086 1+0 records out 00:39:48.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672775 s, 6.1 MB/s 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:48.086 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:39:48.346 17:36:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:48.346 1+0 records in 00:39:48.346 1+0 records out 00:39:48.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679586 s, 6.0 MB/s 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:48.346 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:48.605 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd0", 00:39:48.605 "bdev_name": "nvme0n1" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd1", 00:39:48.605 "bdev_name": "nvme0n2" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd2", 00:39:48.605 "bdev_name": "nvme0n3" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd3", 00:39:48.605 "bdev_name": "nvme1n1" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd4", 00:39:48.605 "bdev_name": "nvme2n1" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd5", 00:39:48.605 "bdev_name": "nvme3n1" 00:39:48.605 } 00:39:48.605 ]' 00:39:48.605 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:48.605 17:36:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd0", 00:39:48.605 "bdev_name": "nvme0n1" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd1", 00:39:48.605 "bdev_name": "nvme0n2" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd2", 00:39:48.605 "bdev_name": "nvme0n3" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd3", 00:39:48.605 "bdev_name": "nvme1n1" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd4", 00:39:48.605 "bdev_name": "nvme2n1" 00:39:48.605 }, 00:39:48.605 { 00:39:48.605 "nbd_device": "/dev/nbd5", 00:39:48.605 "bdev_name": "nvme3n1" 00:39:48.605 } 00:39:48.605 ]' 00:39:48.605 17:36:25 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:48.605 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:48.864 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.124 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.383 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.643 17:36:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.902 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:50.162 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:50.422 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:39:50.682 /dev/nbd0 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:50.682 1+0 records in 00:39:50.682 1+0 records out 00:39:50.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727101 s, 5.6 MB/s 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:50.682 17:36:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:39:50.948 /dev/nbd1 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:50.948 1+0 records in 00:39:50.948 1+0 records out 00:39:50.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512296 s, 8.0 MB/s 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:50.948 17:36:28 
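Note the difference from the first pass: at @322 the test restarts the same six bdevs but names the target nodes itself (/dev/nbd0, /dev/nbd1, /dev/nbd10 ... /dev/nbd13), passing the device as a second argument to nbd_start_disk instead of letting SPDK choose. Both forms go through the same RPC, as a compact side-by-side shows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # No device argument: SPDK claims the first free node and prints it.
    "$rpc" -s "$sock" nbd_start_disk nvme2n1

    # Explicit device argument: the caller picks the node, as at @15 above.
    "$rpc" -s "$sock" nbd_start_disk nvme0n3 /dev/nbd10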
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:50.948 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:39:51.213 /dev/nbd10 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:51.213 1+0 records in 00:39:51.213 1+0 records out 00:39:51.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997256 s, 4.1 MB/s 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:51.213 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:39:51.472 /dev/nbd11 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:51.472 17:36:28 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:51.472 1+0 records in 00:39:51.472 1+0 records out 00:39:51.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908782 s, 4.5 MB/s 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:51.472 17:36:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:39:51.731 /dev/nbd12 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:51.731 1+0 records in 00:39:51.731 1+0 records out 00:39:51.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909811 s, 4.5 MB/s 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:51.731 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:39:51.989 /dev/nbd13 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:51.989 1+0 records in 00:39:51.989 1+0 records out 00:39:51.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568277 s, 7.2 MB/s 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:51.989 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd0", 00:39:52.247 "bdev_name": "nvme0n1" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd1", 00:39:52.247 "bdev_name": "nvme0n2" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd10", 00:39:52.247 "bdev_name": "nvme0n3" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd11", 00:39:52.247 "bdev_name": "nvme1n1" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd12", 00:39:52.247 "bdev_name": "nvme2n1" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd13", 00:39:52.247 "bdev_name": "nvme3n1" 00:39:52.247 } 00:39:52.247 ]' 00:39:52.247 17:36:29 
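The six-entry listing above feeds the count check in the steps that follow: the JSON is reduced to device names with jq, then counted with grep -c /dev/nbd. grep -c prints 0 but exits nonzero when nothing matches, which is presumably why a bare true step shows up in the trace whenever the list is empty — the failure is masked so the zero count survives strict error handling. The guard in isolation (a sketch, reusing $json from the earlier parse):

    # grep -c prints the match count but exits 1 when that count is 0,
    # so mask the status to keep errexit-style handling happy.
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)

    [ "$count" -eq 6 ] || echo "expected 6 nbd devices, found $count" >&2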
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd0", 00:39:52.247 "bdev_name": "nvme0n1" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd1", 00:39:52.247 "bdev_name": "nvme0n2" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd10", 00:39:52.247 "bdev_name": "nvme0n3" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd11", 00:39:52.247 "bdev_name": "nvme1n1" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd12", 00:39:52.247 "bdev_name": "nvme2n1" 00:39:52.247 }, 00:39:52.247 { 00:39:52.247 "nbd_device": "/dev/nbd13", 00:39:52.247 "bdev_name": "nvme3n1" 00:39:52.247 } 00:39:52.247 ]' 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:52.247 /dev/nbd1 00:39:52.247 /dev/nbd10 00:39:52.247 /dev/nbd11 00:39:52.247 /dev/nbd12 00:39:52.247 /dev/nbd13' 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:52.247 /dev/nbd1 00:39:52.247 /dev/nbd10 00:39:52.247 /dev/nbd11 00:39:52.247 /dev/nbd12 00:39:52.247 /dev/nbd13' 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:52.247 256+0 records in 00:39:52.247 256+0 records out 00:39:52.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138422 s, 75.8 MB/s 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:52.247 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:52.507 256+0 records in 00:39:52.507 256+0 records out 00:39:52.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0794231 s, 13.2 MB/s 00:39:52.507 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:52.507 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:52.507 256+0 records in 00:39:52.507 256+0 records out 00:39:52.507 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0961495 s, 10.9 MB/s 00:39:52.507 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:52.507 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:39:52.507 256+0 records in 00:39:52.507 256+0 records out 00:39:52.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0961755 s, 10.9 MB/s 00:39:52.507 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:52.507 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:39:52.766 256+0 records in 00:39:52.766 256+0 records out 00:39:52.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0871045 s, 12.0 MB/s 00:39:52.766 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:52.766 17:36:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:39:52.766 256+0 records in 00:39:52.766 256+0 records out 00:39:52.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1076 s, 9.7 MB/s 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:39:52.766 256+0 records in 00:39:52.766 256+0 records out 00:39:52.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0866348 s, 12.1 MB/s 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:52.766 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:53.026 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:53.286 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:53.546 17:36:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:53.806 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:54.065 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:54.323 17:36:31 
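Teardown mirrors setup: nbd_stop_disk asks SPDK to release the node, and waitfornbd_exit polls /proc/partitions until the kernel has actually dropped it, so the next stage cannot race a half-detached device. A compact sketch of that wait (helper name and retry budget illustrative):

    # Detach an nbd node and wait for the kernel to forget it,
    # mirroring nbd_stop_disk + waitfornbd_exit in the trace.
    detach_and_wait() {
        local dev=$1 name i
        name=$(basename "$dev")

        rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"

        for ((i = 1; i <= 20; i++)); do
            # Done once the name is gone from the partition table.
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    }

    detach_and_wait /dev/nbd13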
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:54.323 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:54.324 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:54.581 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:54.582 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:39:54.582 17:36:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:54.840 malloc_lvol_verify 00:39:54.840 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:55.098 482ad3cd-ab3b-447b-b3d7-b39a958fd8a4 00:39:55.098 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:55.356 9cf0dbed-692f-4758-9353-14452199e0b2 00:39:55.356 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:55.614 /dev/nbd0 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:39:55.614 mke2fs 1.47.0 (5-Feb-2023) 00:39:55.614 Discarding device blocks: 0/4096 done 00:39:55.614 Creating filesystem with 4096 1k blocks and 1024 inodes 00:39:55.614 00:39:55.614 Allocating group tables: 0/1 done 00:39:55.614 Writing inode tables: 0/1 done 00:39:55.614 Creating journal (1024 blocks): done 00:39:55.614 Writing superblocks and filesystem accounting information: 0/1 done 00:39:55.614 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:55.614 17:36:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74643 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74643 ']' 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74643 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74643 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74643' 00:39:55.873 killing process with pid 74643 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74643 00:39:55.873 17:36:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74643 00:39:57.269 17:36:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:39:57.269 00:39:57.269 real 0m11.270s 00:39:57.269 user 0m14.984s 00:39:57.269 sys 0m4.406s 00:39:57.269 17:36:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:57.269 17:36:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:57.269 ************************************ 
00:39:57.269 END TEST bdev_nbd 00:39:57.269 ************************************ 00:39:57.269 17:36:34 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:39:57.269 17:36:34 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:39:57.269 17:36:34 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:39:57.269 17:36:34 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:39:57.269 17:36:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:57.269 17:36:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.269 17:36:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:57.269 ************************************ 00:39:57.269 START TEST bdev_fio 00:39:57.269 ************************************ 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:39:57.269 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:39:57.269 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:57.270 ************************************ 00:39:57.270 START TEST bdev_fio_rw_verify 00:39:57.270 ************************************ 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:57.270 17:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:57.528 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:57.528 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:57.528 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:57.528 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:57.528 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:57.528 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:57.528 fio-3.35 00:39:57.528 Starting 6 threads 00:40:09.730 00:40:09.730 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75055: Tue Nov 26 17:36:45 2024 00:40:09.730 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(1312MiB/10001msec) 00:40:09.730 slat (usec): min=2, max=767, avg= 8.20, stdev= 5.86 00:40:09.730 clat (usec): min=84, max=73632, avg=439.04, 
stdev=279.35 00:40:09.730 lat (usec): min=92, max=73635, avg=447.24, stdev=280.52 00:40:09.730 clat percentiles (usec): 00:40:09.730 | 50.000th=[ 400], 99.000th=[ 1106], 99.900th=[ 1860], 99.990th=[ 3654], 00:40:09.730 | 99.999th=[ 4555] 00:40:09.730 write: IOPS=33.9k, BW=133MiB/s (139MB/s)(1325MiB/10001msec); 0 zone resets 00:40:09.730 slat (usec): min=13, max=1254, avg=37.39, stdev=44.14 00:40:09.730 clat (usec): min=66, max=92317, avg=620.73, stdev=977.29 00:40:09.730 lat (usec): min=80, max=92346, avg=658.12, stdev=979.97 00:40:09.730 clat percentiles (usec): 00:40:09.730 | 50.000th=[ 578], 99.000th=[ 1418], 99.900th=[ 1909], 99.990th=[68682], 00:40:09.730 | 99.999th=[91751] 00:40:09.730 bw ( KiB/s): min=104704, max=163637, per=99.96%, avg=135638.89, stdev=2715.76, samples=114 00:40:09.730 iops : min=26176, max=40909, avg=33909.26, stdev=678.93, samples=114 00:40:09.730 lat (usec) : 100=0.01%, 250=14.26%, 500=38.69%, 750=28.42%, 1000=13.17% 00:40:09.730 lat (msec) : 2=5.38%, 4=0.06%, 10=0.01%, 100=0.01% 00:40:09.730 cpu : usr=47.59%, sys=32.59%, ctx=9116, majf=0, minf=27843 00:40:09.730 IO depths : 1=11.7%, 2=24.0%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:09.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.730 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:09.731 issued rwts: total=335765,339261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:09.731 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:09.731 00:40:09.731 Run status group 0 (all jobs): 00:40:09.731 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1312MiB (1375MB), run=10001-10001msec 00:40:09.731 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1325MiB (1390MB), run=10001-10001msec 00:40:09.990 ----------------------------------------------------- 00:40:09.990 Suppressions used: 00:40:09.990 count bytes template 00:40:09.990 6 48 /usr/src/fio/parse.c 00:40:09.990 3238 310848 /usr/src/fio/iolog.c 00:40:09.990 1 8 libtcmalloc_minimal.so 00:40:09.990 1 904 libcrypto.so 00:40:09.990 ----------------------------------------------------- 00:40:09.990 00:40:09.990 00:40:09.990 real 0m12.777s 00:40:09.990 user 0m30.750s 00:40:09.990 sys 0m20.064s 00:40:09.990 17:36:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.990 17:36:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:40:09.990 ************************************ 00:40:09.990 END TEST bdev_fio_rw_verify 00:40:09.990 ************************************ 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "215e04e3-49ca-402c-89e7-38b1ab7509b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "215e04e3-49ca-402c-89e7-38b1ab7509b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dd169eb3-fbaa-44a0-934f-dfdf42cee47c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dd169eb3-fbaa-44a0-934f-dfdf42cee47c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5994a548-a117-409d-b292-589ceffe8f2d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5994a548-a117-409d-b292-589ceffe8f2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f8a4b2e4-8c0f-4ad3-8d92-39e3def1ce25"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f8a4b2e4-8c0f-4ad3-8d92-39e3def1ce25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d7a0740c-bd5f-4ab8-ba2f-2fa380a3bcfa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d7a0740c-bd5f-4ab8-ba2f-2fa380a3bcfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d2933865-e4ca-43b1-8eb4-662115daf357"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d2933865-e4ca-43b1-8eb4-662115daf357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:40:10.251 /home/vagrant/spdk_repo/spdk 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:40:10.251 00:40:10.251 real 0m12.997s 00:40:10.251 user 0m30.865s 00:40:10.251 sys 0m20.172s 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:10.251 17:36:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:10.251 ************************************ 00:40:10.251 END TEST bdev_fio 00:40:10.251 ************************************ 00:40:10.251 17:36:47 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:10.251 17:36:47 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:10.251 17:36:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:40:10.251 17:36:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.251 17:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:10.251 ************************************ 00:40:10.251 START TEST bdev_verify 00:40:10.251 ************************************ 00:40:10.251 17:36:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:10.251 [2024-11-26 17:36:47.679866] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:10.251 [2024-11-26 17:36:47.680009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75224 ] 00:40:10.511 [2024-11-26 17:36:47.860276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:10.769 [2024-11-26 17:36:48.004060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.769 [2024-11-26 17:36:48.004098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:11.337 Running I/O for 5 seconds... 
00:40:13.663 25632.00 IOPS, 100.12 MiB/s [2024-11-26T17:36:52.047Z] 25136.00 IOPS, 98.19 MiB/s [2024-11-26T17:36:52.984Z] 24949.33 IOPS, 97.46 MiB/s [2024-11-26T17:36:53.921Z] 24600.00 IOPS, 96.09 MiB/s 00:40:16.475 Latency(us) 00:40:16.475 [2024-11-26T17:36:53.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:16.475 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.475 Verification LBA range: start 0x0 length 0x80000 00:40:16.475 nvme0n1 : 5.07 1793.86 7.01 0.00 0.00 71241.57 9501.29 70057.70 00:40:16.475 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.475 Verification LBA range: start 0x80000 length 0x80000 00:40:16.475 nvme0n1 : 5.06 1924.04 7.52 0.00 0.00 66414.89 10932.21 61357.72 00:40:16.475 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.475 Verification LBA range: start 0x0 length 0x80000 00:40:16.475 nvme0n2 : 5.06 1795.68 7.01 0.00 0.00 71062.54 11790.76 61357.72 00:40:16.475 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.475 Verification LBA range: start 0x80000 length 0x80000 00:40:16.475 nvme0n2 : 5.06 1922.01 7.51 0.00 0.00 66391.36 9844.71 62273.51 00:40:16.475 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.475 Verification LBA range: start 0x0 length 0x80000 00:40:16.475 nvme0n3 : 5.07 1793.00 7.00 0.00 0.00 71074.35 10760.50 64105.08 00:40:16.475 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.475 Verification LBA range: start 0x80000 length 0x80000 00:40:16.475 nvme0n3 : 5.05 1900.11 7.42 0.00 0.00 67064.85 14767.06 63189.30 00:40:16.475 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.476 Verification LBA range: start 0x0 length 0xa0000 00:40:16.476 nvme1n1 : 5.05 1750.06 6.84 0.00 0.00 72715.03 9615.76 90205.01 00:40:16.476 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.476 Verification LBA range: start 0xa0000 length 0xa0000 00:40:16.476 nvme1n1 : 5.07 1819.32 7.11 0.00 0.00 69953.05 11275.63 95699.73 00:40:16.476 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.476 Verification LBA range: start 0x0 length 0xbd0bd 00:40:16.476 nvme2n1 : 5.06 2733.02 10.68 0.00 0.00 46423.22 3190.94 56320.89 00:40:16.476 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.476 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:40:16.476 nvme2n1 : 5.07 2734.55 10.68 0.00 0.00 46353.74 4664.79 60441.94 00:40:16.476 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.476 Verification LBA range: start 0x0 length 0x20000 00:40:16.476 nvme3n1 : 5.07 1817.03 7.10 0.00 0.00 69711.83 6582.22 69141.91 00:40:16.476 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.476 Verification LBA range: start 0x20000 length 0x20000 00:40:16.476 nvme3n1 : 5.06 1923.01 7.51 0.00 0.00 66006.23 8986.16 56091.95 00:40:16.476 [2024-11-26T17:36:53.922Z] =================================================================================================================== 00:40:16.476 [2024-11-26T17:36:53.922Z] Total : 23905.70 93.38 0.00 0.00 63892.29 3190.94 95699.73 00:40:17.894 00:40:17.894 real 0m7.360s 00:40:17.894 user 0m11.473s 00:40:17.894 sys 0m1.953s 00:40:17.894 17:36:54 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:17.894 
17:36:54 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:40:17.894 ************************************ 00:40:17.894 END TEST bdev_verify 00:40:17.894 ************************************ 00:40:17.894 17:36:54 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:17.894 17:36:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:40:17.894 17:36:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.894 17:36:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:17.894 ************************************ 00:40:17.894 START TEST bdev_verify_big_io 00:40:17.894 ************************************ 00:40:17.894 17:36:55 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:17.894 [2024-11-26 17:36:55.086173] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:17.894 [2024-11-26 17:36:55.086330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75325 ] 00:40:17.894 [2024-11-26 17:36:55.268511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:18.151 [2024-11-26 17:36:55.418404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.151 [2024-11-26 17:36:55.418421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:18.720 Running I/O for 5 seconds... 
00:40:23.793 1760.00 IOPS, 110.00 MiB/s [2024-11-26T17:37:02.175Z] 3464.00 IOPS, 216.50 MiB/s [2024-11-26T17:37:02.175Z] 3645.33 IOPS, 227.83 MiB/s 00:40:24.729 Latency(us) 00:40:24.729 [2024-11-26T17:37:02.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.729 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x0 length 0x8000 00:40:24.729 nvme0n1 : 5.71 204.39 12.77 0.00 0.00 604771.10 19689.42 783913.59 00:40:24.729 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x8000 length 0x8000 00:40:24.729 nvme0n1 : 5.77 112.24 7.02 0.00 0.00 1042258.41 85626.08 974397.26 00:40:24.729 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x0 length 0x8000 00:40:24.729 nvme0n2 : 5.63 201.60 12.60 0.00 0.00 605018.32 68684.02 681345.45 00:40:24.729 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x8000 length 0x8000 00:40:24.729 nvme0n2 : 5.81 133.67 8.35 0.00 0.00 921064.48 29534.13 974397.26 00:40:24.729 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x0 length 0x8000 00:40:24.729 nvme0n3 : 5.75 217.15 13.57 0.00 0.00 546956.11 81047.14 542145.84 00:40:24.729 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x8000 length 0x8000 00:40:24.729 nvme0n3 : 5.80 99.39 6.21 0.00 0.00 1203332.86 143778.54 1838900.09 00:40:24.729 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x0 length 0xa000 00:40:24.729 nvme1n1 : 5.73 209.27 13.08 0.00 0.00 557482.41 9558.53 959744.67 00:40:24.729 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0xa000 length 0xa000 00:40:24.729 nvme1n1 : 5.81 121.07 7.57 0.00 0.00 974333.93 12821.02 2183235.97 00:40:24.729 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x0 length 0xbd0b 00:40:24.729 nvme2n1 : 5.74 220.05 13.75 0.00 0.00 520056.15 13965.75 1304080.54 00:40:24.729 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0xbd0b length 0xbd0b 00:40:24.729 nvme2n1 : 5.81 165.27 10.33 0.00 0.00 696083.45 10130.89 959744.67 00:40:24.729 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x0 length 0x2000 00:40:24.729 nvme3n1 : 5.74 211.85 13.24 0.00 0.00 527188.76 7383.53 1355364.61 00:40:24.729 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:24.729 Verification LBA range: start 0x2000 length 0x2000 00:40:24.729 nvme3n1 : 5.82 110.01 6.88 0.00 0.00 1014288.99 11676.28 2857255.13 00:40:24.729 [2024-11-26T17:37:02.176Z] =================================================================================================================== 00:40:24.730 [2024-11-26T17:37:02.176Z] Total : 2005.96 125.37 0.00 0.00 704770.41 7383.53 2857255.13 00:40:26.100 00:40:26.100 real 0m8.497s 00:40:26.100 user 0m15.359s 00:40:26.100 sys 0m0.650s 00:40:26.100 17:37:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.100 17:37:03 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:40:26.100 ************************************ 00:40:26.100 END TEST bdev_verify_big_io 00:40:26.100 ************************************ 00:40:26.359 17:37:03 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:26.359 17:37:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:40:26.359 17:37:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.359 17:37:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:26.359 ************************************ 00:40:26.359 START TEST bdev_write_zeroes 00:40:26.359 ************************************ 00:40:26.359 17:37:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:26.359 [2024-11-26 17:37:03.666749] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:26.359 [2024-11-26 17:37:03.666876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75447 ] 00:40:26.618 [2024-11-26 17:37:03.847059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.618 [2024-11-26 17:37:03.982425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.185 Running I/O for 1 seconds... 
00:40:28.123 49344.00 IOPS, 192.75 MiB/s 00:40:28.124 Latency(us) 00:40:28.124 [2024-11-26T17:37:05.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:28.124 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:28.124 nvme0n1 : 1.02 7551.34 29.50 0.00 0.00 16936.17 8699.98 30449.91 00:40:28.124 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:28.124 nvme0n2 : 1.02 7544.62 29.47 0.00 0.00 16938.89 8757.21 31823.59 00:40:28.124 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:28.124 nvme0n3 : 1.02 7537.99 29.45 0.00 0.00 16939.47 8699.98 33426.22 00:40:28.124 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:28.124 nvme1n1 : 1.02 7531.52 29.42 0.00 0.00 16941.25 8699.98 33884.12 00:40:28.124 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:28.124 nvme2n1 : 1.03 11544.99 45.10 0.00 0.00 11041.10 5265.77 21063.10 00:40:28.124 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:28.124 nvme3n1 : 1.02 7524.01 29.39 0.00 0.00 16834.75 7498.01 32739.38 00:40:28.124 [2024-11-26T17:37:05.570Z] =================================================================================================================== 00:40:28.124 [2024-11-26T17:37:05.570Z] Total : 49234.47 192.32 0.00 0.00 15533.08 5265.77 33884.12 00:40:29.502 00:40:29.502 real 0m3.266s 00:40:29.502 user 0m2.481s 00:40:29.502 sys 0m0.612s 00:40:29.502 17:37:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.502 17:37:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:40:29.502 ************************************ 00:40:29.502 END TEST bdev_write_zeroes 00:40:29.502 ************************************ 00:40:29.502 17:37:06 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:29.502 17:37:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:40:29.502 17:37:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.502 17:37:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:29.502 ************************************ 00:40:29.502 START TEST bdev_json_nonenclosed 00:40:29.502 ************************************ 00:40:29.502 17:37:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:29.761 [2024-11-26 17:37:06.996659] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
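The three bdevperf stages above (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) differ only in I/O size, workload, duration, and core mask. Stripped of the run_test wrappers, the invocations reduce to this sketch; the binary and config paths are taken verbatim from the log.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: 4 KiB verified I/O, queue depth 128, 5 s on cores 0-1
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
    # bdev_verify_big_io: same shape with 64 KiB I/O units
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
    # bdev_write_zeroes: 1 s of write_zeroes commands on a single core
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1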
00:40:29.761 [2024-11-26 17:37:06.997146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75503 ] 00:40:29.761 [2024-11-26 17:37:07.171692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.020 [2024-11-26 17:37:07.314249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.020 [2024-11-26 17:37:07.314391] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:40:30.020 [2024-11-26 17:37:07.314412] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:30.020 [2024-11-26 17:37:07.314424] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:30.278 00:40:30.278 real 0m0.674s 00:40:30.278 user 0m0.433s 00:40:30.278 sys 0m0.136s 00:40:30.278 17:37:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.278 17:37:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:40:30.278 ************************************ 00:40:30.278 END TEST bdev_json_nonenclosed 00:40:30.278 ************************************ 00:40:30.278 17:37:07 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:30.278 17:37:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:40:30.278 17:37:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.278 17:37:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:30.278 ************************************ 00:40:30.278 START TEST bdev_json_nonarray 00:40:30.278 ************************************ 00:40:30.278 17:37:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:30.536 [2024-11-26 17:37:07.727449] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:30.536 [2024-11-26 17:37:07.727578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75534 ] 00:40:30.536 [2024-11-26 17:37:07.903951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.795 [2024-11-26 17:37:08.047682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.795 [2024-11-26 17:37:08.047816] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
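The nonenclosed/nonarray stages are negative tests: bdevperf is fed configs that must trip the two validation errors printed above, and a clean start would be a failure. The JSON files themselves are not shown in this log, so the contents below are hypothetical reconstructions consistent with those error messages.

    # Hypothetical nonenclosed.json: top level not wrapped in {}
    printf '"subsystems": []\n' > nonenclosed.json
    # Hypothetical nonarray.json: "subsystems" present but not an array
    printf '{ "subsystems": {} }\n' > nonarray.json

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    for conf in nonenclosed.json nonarray.json; do
        # Each run must abort during JSON validation; success would be a bug.
        "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1 && exit 1
    done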
00:40:30.795 [2024-11-26 17:37:08.047837] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:30.795 [2024-11-26 17:37:08.047848] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:31.055 00:40:31.055 real 0m0.693s 00:40:31.055 user 0m0.454s 00:40:31.055 sys 0m0.134s 00:40:31.055 17:37:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.055 17:37:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:40:31.055 ************************************ 00:40:31.055 END TEST bdev_json_nonarray 00:40:31.055 ************************************ 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:40:31.055 17:37:08 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:31.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:50.106 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:50.106 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:50.106 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:40:50.106 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:40:50.106 00:40:50.106 real 1m14.133s 00:40:50.106 user 1m33.100s 00:40:50.106 sys 1m3.137s 00:40:50.106 17:37:26 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:50.106 17:37:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:50.106 ************************************ 00:40:50.106 END TEST blockdev_xnvme 00:40:50.106 ************************************ 00:40:50.106 17:37:26 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:40:50.106 17:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:50.106 17:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:50.106 17:37:26 -- common/autotest_common.sh@10 -- # set +x 00:40:50.106 ************************************ 00:40:50.106 START TEST ublk 00:40:50.106 ************************************ 00:40:50.106 17:37:26 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:40:50.106 * Looking for test storage... 
00:40:50.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:40:50.106 17:37:26 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:50.106 17:37:26 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:50.106 17:37:26 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:40:50.106 17:37:26 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:50.106 17:37:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:50.106 17:37:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:40:50.106 17:37:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:40:50.106 17:37:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:40:50.106 17:37:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:50.106 17:37:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:40:50.106 17:37:26 ublk -- scripts/common.sh@345 -- # : 1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:50.106 17:37:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:50.106 17:37:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@353 -- # local d=1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:50.106 17:37:26 ublk -- scripts/common.sh@355 -- # echo 1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:40:50.106 17:37:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@353 -- # local d=2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:50.106 17:37:26 ublk -- scripts/common.sh@355 -- # echo 2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:40:50.106 17:37:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:50.106 17:37:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:50.106 17:37:26 ublk -- scripts/common.sh@368 -- # return 0 00:40:50.106 17:37:26 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.107 --rc genhtml_branch_coverage=1 00:40:50.107 --rc genhtml_function_coverage=1 00:40:50.107 --rc genhtml_legend=1 00:40:50.107 --rc geninfo_all_blocks=1 00:40:50.107 --rc geninfo_unexecuted_blocks=1 00:40:50.107 00:40:50.107 ' 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.107 --rc genhtml_branch_coverage=1 00:40:50.107 --rc genhtml_function_coverage=1 00:40:50.107 --rc genhtml_legend=1 00:40:50.107 --rc geninfo_all_blocks=1 00:40:50.107 --rc geninfo_unexecuted_blocks=1 00:40:50.107 00:40:50.107 ' 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.107 --rc genhtml_branch_coverage=1 00:40:50.107 --rc 
genhtml_function_coverage=1 00:40:50.107 --rc genhtml_legend=1 00:40:50.107 --rc geninfo_all_blocks=1 00:40:50.107 --rc geninfo_unexecuted_blocks=1 00:40:50.107 00:40:50.107 ' 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.107 --rc genhtml_branch_coverage=1 00:40:50.107 --rc genhtml_function_coverage=1 00:40:50.107 --rc genhtml_legend=1 00:40:50.107 --rc geninfo_all_blocks=1 00:40:50.107 --rc geninfo_unexecuted_blocks=1 00:40:50.107 00:40:50.107 ' 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:40:50.107 17:37:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:40:50.107 17:37:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:40:50.107 17:37:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:40:50.107 17:37:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:40:50.107 17:37:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:40:50.107 17:37:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:40:50.107 17:37:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:40:50.107 17:37:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:40:50.107 17:37:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:50.107 17:37:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:40:50.107 ************************************ 00:40:50.107 START TEST test_save_ublk_config 00:40:50.107 ************************************ 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75939 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75939 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75939 ']' 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:50.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
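The lcov probe traced in the preamble above uses the field-wise version comparison from scripts/common.sh. A condensed sketch of that technique (simplified, without the helper's extra numeric validation):

    lt() {
        local -a v1 v2
        local i
        # Split dotted versions into fields, exactly as the IFS=.-: trace above
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # earliest differing field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the 'lt 1.15 2' call above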
00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:50.107 17:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:40:50.107 [2024-11-26 17:37:26.549477] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:50.107 [2024-11-26 17:37:26.549636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75939 ] 00:40:50.107 [2024-11-26 17:37:26.709864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.107 [2024-11-26 17:37:26.851667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.672 17:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:50.672 17:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:40:50.672 17:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:40:50.672 17:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:40:50.672 17:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.672 17:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:40:50.672 [2024-11-26 17:37:27.915632] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:40:50.672 [2024-11-26 17:37:27.917131] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:50.672 malloc0 00:40:50.672 [2024-11-26 17:37:28.008777] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:40:50.672 [2024-11-26 17:37:28.008905] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:40:50.672 [2024-11-26 17:37:28.008919] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:40:50.672 [2024-11-26 17:37:28.008929] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:40:50.672 [2024-11-26 17:37:28.016258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:50.672 [2024-11-26 17:37:28.016293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:50.672 [2024-11-26 17:37:28.021629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:50.672 [2024-11-26 17:37:28.021759] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:40:50.672 [2024-11-26 17:37:28.044678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:40:50.672 0 00:40:50.672 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.672 17:37:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:40:50.672 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.672 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:40:50.932 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.932 17:37:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:40:50.932 
"subsystems": [ 00:40:50.932 { 00:40:50.932 "subsystem": "fsdev", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "fsdev_set_opts", 00:40:50.932 "params": { 00:40:50.932 "fsdev_io_pool_size": 65535, 00:40:50.932 "fsdev_io_cache_size": 256 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "keyring", 00:40:50.932 "config": [] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "iobuf", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "iobuf_set_options", 00:40:50.932 "params": { 00:40:50.932 "small_pool_count": 8192, 00:40:50.932 "large_pool_count": 1024, 00:40:50.932 "small_bufsize": 8192, 00:40:50.932 "large_bufsize": 135168, 00:40:50.932 "enable_numa": false 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "sock", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "sock_set_default_impl", 00:40:50.932 "params": { 00:40:50.932 "impl_name": "posix" 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "sock_impl_set_options", 00:40:50.932 "params": { 00:40:50.932 "impl_name": "ssl", 00:40:50.932 "recv_buf_size": 4096, 00:40:50.932 "send_buf_size": 4096, 00:40:50.932 "enable_recv_pipe": true, 00:40:50.932 "enable_quickack": false, 00:40:50.932 "enable_placement_id": 0, 00:40:50.932 "enable_zerocopy_send_server": true, 00:40:50.932 "enable_zerocopy_send_client": false, 00:40:50.932 "zerocopy_threshold": 0, 00:40:50.932 "tls_version": 0, 00:40:50.932 "enable_ktls": false 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "sock_impl_set_options", 00:40:50.932 "params": { 00:40:50.932 "impl_name": "posix", 00:40:50.932 "recv_buf_size": 2097152, 00:40:50.932 "send_buf_size": 2097152, 00:40:50.932 "enable_recv_pipe": true, 00:40:50.932 "enable_quickack": false, 00:40:50.932 "enable_placement_id": 0, 00:40:50.932 "enable_zerocopy_send_server": true, 00:40:50.932 "enable_zerocopy_send_client": false, 00:40:50.932 "zerocopy_threshold": 0, 00:40:50.932 "tls_version": 0, 00:40:50.932 "enable_ktls": false 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "vmd", 00:40:50.932 "config": [] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "accel", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "accel_set_options", 00:40:50.932 "params": { 00:40:50.932 "small_cache_size": 128, 00:40:50.932 "large_cache_size": 16, 00:40:50.932 "task_count": 2048, 00:40:50.932 "sequence_count": 2048, 00:40:50.932 "buf_count": 2048 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "bdev", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "bdev_set_options", 00:40:50.932 "params": { 00:40:50.932 "bdev_io_pool_size": 65535, 00:40:50.932 "bdev_io_cache_size": 256, 00:40:50.932 "bdev_auto_examine": true, 00:40:50.932 "iobuf_small_cache_size": 128, 00:40:50.932 "iobuf_large_cache_size": 16 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "bdev_raid_set_options", 00:40:50.932 "params": { 00:40:50.932 "process_window_size_kb": 1024, 00:40:50.932 "process_max_bandwidth_mb_sec": 0 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "bdev_iscsi_set_options", 00:40:50.932 "params": { 00:40:50.932 "timeout_sec": 30 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "bdev_nvme_set_options", 00:40:50.932 "params": { 00:40:50.932 "action_on_timeout": "none", 
00:40:50.932 "timeout_us": 0, 00:40:50.932 "timeout_admin_us": 0, 00:40:50.932 "keep_alive_timeout_ms": 10000, 00:40:50.932 "arbitration_burst": 0, 00:40:50.932 "low_priority_weight": 0, 00:40:50.932 "medium_priority_weight": 0, 00:40:50.932 "high_priority_weight": 0, 00:40:50.932 "nvme_adminq_poll_period_us": 10000, 00:40:50.932 "nvme_ioq_poll_period_us": 0, 00:40:50.932 "io_queue_requests": 0, 00:40:50.932 "delay_cmd_submit": true, 00:40:50.932 "transport_retry_count": 4, 00:40:50.932 "bdev_retry_count": 3, 00:40:50.932 "transport_ack_timeout": 0, 00:40:50.932 "ctrlr_loss_timeout_sec": 0, 00:40:50.932 "reconnect_delay_sec": 0, 00:40:50.932 "fast_io_fail_timeout_sec": 0, 00:40:50.932 "disable_auto_failback": false, 00:40:50.932 "generate_uuids": false, 00:40:50.932 "transport_tos": 0, 00:40:50.932 "nvme_error_stat": false, 00:40:50.932 "rdma_srq_size": 0, 00:40:50.932 "io_path_stat": false, 00:40:50.932 "allow_accel_sequence": false, 00:40:50.932 "rdma_max_cq_size": 0, 00:40:50.932 "rdma_cm_event_timeout_ms": 0, 00:40:50.932 "dhchap_digests": [ 00:40:50.932 "sha256", 00:40:50.932 "sha384", 00:40:50.932 "sha512" 00:40:50.932 ], 00:40:50.932 "dhchap_dhgroups": [ 00:40:50.932 "null", 00:40:50.932 "ffdhe2048", 00:40:50.932 "ffdhe3072", 00:40:50.932 "ffdhe4096", 00:40:50.932 "ffdhe6144", 00:40:50.932 "ffdhe8192" 00:40:50.932 ] 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "bdev_nvme_set_hotplug", 00:40:50.932 "params": { 00:40:50.932 "period_us": 100000, 00:40:50.932 "enable": false 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "bdev_malloc_create", 00:40:50.932 "params": { 00:40:50.932 "name": "malloc0", 00:40:50.932 "num_blocks": 8192, 00:40:50.932 "block_size": 4096, 00:40:50.932 "physical_block_size": 4096, 00:40:50.932 "uuid": "18569e6d-1b26-49f7-b4d6-57d04715de24", 00:40:50.932 "optimal_io_boundary": 0, 00:40:50.932 "md_size": 0, 00:40:50.932 "dif_type": 0, 00:40:50.932 "dif_is_head_of_md": false, 00:40:50.932 "dif_pi_format": 0 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "bdev_wait_for_examine" 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "scsi", 00:40:50.932 "config": null 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "scheduler", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "framework_set_scheduler", 00:40:50.932 "params": { 00:40:50.932 "name": "static" 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "vhost_scsi", 00:40:50.932 "config": [] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "vhost_blk", 00:40:50.932 "config": [] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "ublk", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "ublk_create_target", 00:40:50.932 "params": { 00:40:50.932 "cpumask": "1" 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "ublk_start_disk", 00:40:50.932 "params": { 00:40:50.932 "bdev_name": "malloc0", 00:40:50.932 "ublk_id": 0, 00:40:50.932 "num_queues": 1, 00:40:50.932 "queue_depth": 128 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "nbd", 00:40:50.932 "config": [] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "nvmf", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "nvmf_set_config", 00:40:50.932 "params": { 00:40:50.932 "discovery_filter": "match_any", 00:40:50.932 "admin_cmd_passthru": { 00:40:50.932 "identify_ctrlr": false 
00:40:50.932 }, 00:40:50.932 "dhchap_digests": [ 00:40:50.932 "sha256", 00:40:50.932 "sha384", 00:40:50.932 "sha512" 00:40:50.932 ], 00:40:50.932 "dhchap_dhgroups": [ 00:40:50.932 "null", 00:40:50.932 "ffdhe2048", 00:40:50.932 "ffdhe3072", 00:40:50.932 "ffdhe4096", 00:40:50.932 "ffdhe6144", 00:40:50.932 "ffdhe8192" 00:40:50.932 ] 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "nvmf_set_max_subsystems", 00:40:50.932 "params": { 00:40:50.932 "max_subsystems": 1024 00:40:50.932 } 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "method": "nvmf_set_crdt", 00:40:50.932 "params": { 00:40:50.932 "crdt1": 0, 00:40:50.932 "crdt2": 0, 00:40:50.932 "crdt3": 0 00:40:50.932 } 00:40:50.932 } 00:40:50.932 ] 00:40:50.932 }, 00:40:50.932 { 00:40:50.932 "subsystem": "iscsi", 00:40:50.932 "config": [ 00:40:50.932 { 00:40:50.932 "method": "iscsi_set_options", 00:40:50.932 "params": { 00:40:50.932 "node_base": "iqn.2016-06.io.spdk", 00:40:50.932 "max_sessions": 128, 00:40:50.932 "max_connections_per_session": 2, 00:40:50.932 "max_queue_depth": 64, 00:40:50.932 "default_time2wait": 2, 00:40:50.933 "default_time2retain": 20, 00:40:50.933 "first_burst_length": 8192, 00:40:50.933 "immediate_data": true, 00:40:50.933 "allow_duplicated_isid": false, 00:40:50.933 "error_recovery_level": 0, 00:40:50.933 "nop_timeout": 60, 00:40:50.933 "nop_in_interval": 30, 00:40:50.933 "disable_chap": false, 00:40:50.933 "require_chap": false, 00:40:50.933 "mutual_chap": false, 00:40:50.933 "chap_group": 0, 00:40:50.933 "max_large_datain_per_connection": 64, 00:40:50.933 "max_r2t_per_connection": 4, 00:40:50.933 "pdu_pool_size": 36864, 00:40:50.933 "immediate_data_pool_size": 16384, 00:40:50.933 "data_out_pool_size": 2048 00:40:50.933 } 00:40:50.933 } 00:40:50.933 ] 00:40:50.933 } 00:40:50.933 ] 00:40:50.933 }' 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75939 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75939 ']' 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75939 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75939 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:50.933 killing process with pid 75939 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75939' 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75939 00:40:50.933 17:37:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75939 00:40:52.840 [2024-11-26 17:37:29.916279] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:40:52.840 [2024-11-26 17:37:29.949731] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:52.840 [2024-11-26 17:37:29.949877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:40:52.840 [2024-11-26 17:37:29.957669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:52.840 [2024-11-26 
17:37:29.957730] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:40:52.840 [2024-11-26 17:37:29.957745] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:40:52.840 [2024-11-26 17:37:29.957773] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:40:52.840 [2024-11-26 17:37:29.957952] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76011 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76011 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76011 ']' 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:40:54.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.798 17:37:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:40:54.798 "subsystems": [ 00:40:54.798 { 00:40:54.798 "subsystem": "fsdev", 00:40:54.798 "config": [ 00:40:54.798 { 00:40:54.798 "method": "fsdev_set_opts", 00:40:54.798 "params": { 00:40:54.798 "fsdev_io_pool_size": 65535, 00:40:54.798 "fsdev_io_cache_size": 256 00:40:54.798 } 00:40:54.798 } 00:40:54.798 ] 00:40:54.798 }, 00:40:54.798 { 00:40:54.798 "subsystem": "keyring", 00:40:54.798 "config": [] 00:40:54.798 }, 00:40:54.798 { 00:40:54.798 "subsystem": "iobuf", 00:40:54.798 "config": [ 00:40:54.798 { 00:40:54.798 "method": "iobuf_set_options", 00:40:54.798 "params": { 00:40:54.798 "small_pool_count": 8192, 00:40:54.798 "large_pool_count": 1024, 00:40:54.798 "small_bufsize": 8192, 00:40:54.798 "large_bufsize": 135168, 00:40:54.798 "enable_numa": false 00:40:54.798 } 00:40:54.798 } 00:40:54.798 ] 00:40:54.798 }, 00:40:54.798 { 00:40:54.798 "subsystem": "sock", 00:40:54.798 "config": [ 00:40:54.798 { 00:40:54.799 "method": "sock_set_default_impl", 00:40:54.799 "params": { 00:40:54.799 "impl_name": "posix" 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "sock_impl_set_options", 00:40:54.799 "params": { 00:40:54.799 "impl_name": "ssl", 00:40:54.799 "recv_buf_size": 4096, 00:40:54.799 "send_buf_size": 4096, 00:40:54.799 "enable_recv_pipe": true, 00:40:54.799 "enable_quickack": false, 00:40:54.799 "enable_placement_id": 0, 00:40:54.799 "enable_zerocopy_send_server": true, 00:40:54.799 "enable_zerocopy_send_client": false, 00:40:54.799 "zerocopy_threshold": 0, 00:40:54.799 "tls_version": 0, 00:40:54.799 "enable_ktls": false 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "sock_impl_set_options", 00:40:54.799 "params": { 00:40:54.799 "impl_name": "posix", 00:40:54.799 "recv_buf_size": 2097152, 00:40:54.799 "send_buf_size": 2097152, 00:40:54.799 "enable_recv_pipe": true, 00:40:54.799 "enable_quickack": false, 00:40:54.799 "enable_placement_id": 0, 00:40:54.799 "enable_zerocopy_send_server": true, 00:40:54.799 "enable_zerocopy_send_client": false, 00:40:54.799 "zerocopy_threshold": 0, 00:40:54.799 
"tls_version": 0, 00:40:54.799 "enable_ktls": false 00:40:54.799 } 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "vmd", 00:40:54.799 "config": [] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "accel", 00:40:54.799 "config": [ 00:40:54.799 { 00:40:54.799 "method": "accel_set_options", 00:40:54.799 "params": { 00:40:54.799 "small_cache_size": 128, 00:40:54.799 "large_cache_size": 16, 00:40:54.799 "task_count": 2048, 00:40:54.799 "sequence_count": 2048, 00:40:54.799 "buf_count": 2048 00:40:54.799 } 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "bdev", 00:40:54.799 "config": [ 00:40:54.799 { 00:40:54.799 "method": "bdev_set_options", 00:40:54.799 "params": { 00:40:54.799 "bdev_io_pool_size": 65535, 00:40:54.799 "bdev_io_cache_size": 256, 00:40:54.799 "bdev_auto_examine": true, 00:40:54.799 "iobuf_small_cache_size": 128, 00:40:54.799 "iobuf_large_cache_size": 16 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "bdev_raid_set_options", 00:40:54.799 "params": { 00:40:54.799 "process_window_size_kb": 1024, 00:40:54.799 "process_max_bandwidth_mb_sec": 0 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "bdev_iscsi_set_options", 00:40:54.799 "params": { 00:40:54.799 "timeout_sec": 30 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "bdev_nvme_set_options", 00:40:54.799 "params": { 00:40:54.799 "action_on_timeout": "none", 00:40:54.799 "timeout_us": 0, 00:40:54.799 "timeout_admin_us": 0, 00:40:54.799 "keep_alive_timeout_ms": 10000, 00:40:54.799 "arbitration_burst": 0, 00:40:54.799 "low_priority_weight": 0, 00:40:54.799 "medium_priority_weight": 0, 00:40:54.799 "high_priority_weight": 0, 00:40:54.799 "nvme_adminq_poll_period_us": 10000, 00:40:54.799 "nvme_ioq_poll_period_us": 0, 00:40:54.799 "io_queue_requests": 0, 00:40:54.799 "delay_cmd_submit": true, 00:40:54.799 "transport_retry_count": 4, 00:40:54.799 "bdev_retry_count": 3, 00:40:54.799 "transport_ack_timeout": 0, 00:40:54.799 "ctrlr_loss_timeout_sec": 0, 00:40:54.799 "reconnect_delay_sec": 0, 00:40:54.799 "fast_io_fail_timeout_sec": 0, 00:40:54.799 "disable_auto_failback": false, 00:40:54.799 "generate_uuids": false, 00:40:54.799 "transport_tos": 0, 00:40:54.799 "nvme_error_stat": false, 00:40:54.799 "rdma_srq_size": 0, 00:40:54.799 "io_path_stat": false, 00:40:54.799 "allow_accel_sequence": false, 00:40:54.799 "rdma_max_cq_size": 0, 00:40:54.799 "rdma_cm_event_timeout_ms": 0, 00:40:54.799 "dhchap_digests": [ 00:40:54.799 "sha256", 00:40:54.799 "sha384", 00:40:54.799 "sha512" 00:40:54.799 ], 00:40:54.799 "dhchap_dhgroups": [ 00:40:54.799 "null", 00:40:54.799 "ffdhe2048", 00:40:54.799 "ffdhe3072", 00:40:54.799 "ffdhe4096", 00:40:54.799 "ffdhe6144", 00:40:54.799 "ffdhe8192" 00:40:54.799 ] 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "bdev_nvme_set_hotplug", 00:40:54.799 "params": { 00:40:54.799 "period_us": 100000, 00:40:54.799 "enable": false 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "bdev_malloc_create", 00:40:54.799 "params": { 00:40:54.799 "name": "malloc0", 00:40:54.799 "num_blocks": 8192, 00:40:54.799 "block_size": 4096, 00:40:54.799 "physical_block_size": 4096, 00:40:54.799 "uuid": "18569e6d-1b26-49f7-b4d6-57d04715de24", 00:40:54.799 "optimal_io_boundary": 0, 00:40:54.799 "md_size": 0, 00:40:54.799 "dif_type": 0, 00:40:54.799 "dif_is_head_of_md": false, 00:40:54.799 "dif_pi_format": 0 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 
00:40:54.799 "method": "bdev_wait_for_examine" 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "scsi", 00:40:54.799 "config": null 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "scheduler", 00:40:54.799 "config": [ 00:40:54.799 { 00:40:54.799 "method": "framework_set_scheduler", 00:40:54.799 "params": { 00:40:54.799 "name": "static" 00:40:54.799 } 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "vhost_scsi", 00:40:54.799 "config": [] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "vhost_blk", 00:40:54.799 "config": [] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "ublk", 00:40:54.799 "config": [ 00:40:54.799 { 00:40:54.799 "method": "ublk_create_target", 00:40:54.799 "params": { 00:40:54.799 "cpumask": "1" 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "ublk_start_disk", 00:40:54.799 "params": { 00:40:54.799 "bdev_name": "malloc0", 00:40:54.799 "ublk_id": 0, 00:40:54.799 "num_queues": 1, 00:40:54.799 "queue_depth": 128 00:40:54.799 } 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "nbd", 00:40:54.799 "config": [] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "nvmf", 00:40:54.799 "config": [ 00:40:54.799 { 00:40:54.799 "method": "nvmf_set_config", 00:40:54.799 "params": { 00:40:54.799 "discovery_filter": "match_any", 00:40:54.799 "admin_cmd_passthru": { 00:40:54.799 "identify_ctrlr": false 00:40:54.799 }, 00:40:54.799 "dhchap_digests": [ 00:40:54.799 "sha256", 00:40:54.799 "sha384", 00:40:54.799 "sha512" 00:40:54.799 ], 00:40:54.799 "dhchap_dhgroups": [ 00:40:54.799 "null", 00:40:54.799 "ffdhe2048", 00:40:54.799 "ffdhe3072", 00:40:54.799 "ffdhe4096", 00:40:54.799 "ffdhe6144", 00:40:54.799 "ffdhe8192" 00:40:54.799 ] 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "nvmf_set_max_subsystems", 00:40:54.799 "params": { 00:40:54.799 "max_subsystems": 1024 00:40:54.799 } 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "method": "nvmf_set_crdt", 00:40:54.799 "params": { 00:40:54.799 "crdt1": 0, 00:40:54.799 "crdt2": 0, 00:40:54.799 "crdt3": 0 00:40:54.799 } 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }, 00:40:54.799 { 00:40:54.799 "subsystem": "iscsi", 00:40:54.799 "config": [ 00:40:54.799 { 00:40:54.799 "method": "iscsi_set_options", 00:40:54.799 "params": { 00:40:54.799 "node_base": "iqn.2016-06.io.spdk", 00:40:54.799 "max_sessions": 128, 00:40:54.799 "max_connections_per_session": 2, 00:40:54.799 "max_queue_depth": 64, 00:40:54.799 "default_time2wait": 2, 00:40:54.799 "default_time2retain": 20, 00:40:54.799 "first_burst_length": 8192, 00:40:54.799 "immediate_data": true, 00:40:54.799 "allow_duplicated_isid": false, 00:40:54.799 "error_recovery_level": 0, 00:40:54.799 "nop_timeout": 60, 00:40:54.799 "nop_in_interval": 30, 00:40:54.799 "disable_chap": false, 00:40:54.799 "require_chap": false, 00:40:54.799 "mutual_chap": false, 00:40:54.799 "chap_group": 0, 00:40:54.799 "max_large_datain_per_connection": 64, 00:40:54.799 "max_r2t_per_connection": 4, 00:40:54.799 "pdu_pool_size": 36864, 00:40:54.799 "immediate_data_pool_size": 16384, 00:40:54.799 "data_out_pool_size": 2048 00:40:54.799 } 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 } 00:40:54.799 ] 00:40:54.799 }' 00:40:54.799 17:37:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:40:54.799 [2024-11-26 17:37:32.112274] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:40:54.799 [2024-11-26 17:37:32.112432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76011 ] 00:40:55.057 [2024-11-26 17:37:32.294275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.057 [2024-11-26 17:37:32.432766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.434 [2024-11-26 17:37:33.606637] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:40:56.434 [2024-11-26 17:37:33.607891] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:56.434 [2024-11-26 17:37:33.614767] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:40:56.434 [2024-11-26 17:37:33.614847] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:40:56.434 [2024-11-26 17:37:33.614860] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:40:56.434 [2024-11-26 17:37:33.614867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:40:56.434 [2024-11-26 17:37:33.621641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:56.434 [2024-11-26 17:37:33.621662] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:56.434 [2024-11-26 17:37:33.628655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:56.434 [2024-11-26 17:37:33.628750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:40:56.434 [2024-11-26 17:37:33.645628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76011 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76011 ']' 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76011 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76011 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76011' 00:40:56.434 killing process with pid 76011 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76011 00:40:56.434 17:37:33 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76011 00:40:58.339 [2024-11-26 17:37:35.420455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:40:58.339 [2024-11-26 17:37:35.450654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:58.339 [2024-11-26 17:37:35.450815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:40:58.339 [2024-11-26 17:37:35.459641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:58.339 [2024-11-26 17:37:35.459695] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:40:58.339 [2024-11-26 17:37:35.459703] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:40:58.339 [2024-11-26 17:37:35.459728] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:40:58.339 [2024-11-26 17:37:35.459912] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:00.242 17:37:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:41:00.242 00:41:00.242 real 0m11.015s 00:41:00.242 user 0m8.324s 00:41:00.242 sys 0m3.466s 00:41:00.242 17:37:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.242 17:37:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:00.242 ************************************ 00:41:00.242 END TEST test_save_ublk_config 00:41:00.242 ************************************ 00:41:00.242 17:37:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76108 00:41:00.242 17:37:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:41:00.242 17:37:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:00.242 17:37:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76108 00:41:00.242 17:37:37 ublk -- common/autotest_common.sh@835 -- # '[' -z 76108 ']' 00:41:00.242 17:37:37 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:00.242 17:37:37 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:00.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:00.242 17:37:37 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:00.242 17:37:37 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:00.242 17:37:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:00.242 [2024-11-26 17:37:37.614239] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
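The teardown of pid 76011 just completed follows the harness's killprocess helper: it first confirms via ps that the pid still names an SPDK reactor thread, then signals and reaps it. The shape of that pattern, as a sketch:

  ps --no-headers -o comm= "$pid"   # sanity check: should print reactor_0
  kill "$pid" && wait "$pid"        # SIGTERM the target, then reap its exit status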
00:41:00.242 [2024-11-26 17:37:37.614380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76108 ] 00:41:00.501 [2024-11-26 17:37:37.783414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:00.501 [2024-11-26 17:37:37.925395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.501 [2024-11-26 17:37:37.925436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:01.881 17:37:38 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:01.881 17:37:38 ublk -- common/autotest_common.sh@868 -- # return 0 00:41:01.881 17:37:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:41:01.881 17:37:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:01.881 17:37:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.881 17:37:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:01.881 ************************************ 00:41:01.881 START TEST test_create_ublk 00:41:01.881 ************************************ 00:41:01.881 17:37:38 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:41:01.881 17:37:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:41:01.881 17:37:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.881 17:37:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:01.881 [2024-11-26 17:37:38.981648] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:01.881 [2024-11-26 17:37:38.984990] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:01.881 17:37:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.881 17:37:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:41:01.881 17:37:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:41:01.881 17:37:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.881 17:37:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:01.881 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.881 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:41:01.881 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:41:01.881 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.881 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:02.139 [2024-11-26 17:37:39.327840] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:41:02.139 [2024-11-26 17:37:39.328305] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:41:02.139 [2024-11-26 17:37:39.328327] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:02.139 [2024-11-26 17:37:39.328335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:02.139 [2024-11-26 17:37:39.335668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:02.139 [2024-11-26 17:37:39.335693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:02.139 
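The trace above and below shows the standard three-step bring-up every ublk disk in these tests goes through: UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, after which /dev/ublkb0 exists. On the RPC side, test_create_ublk reduces to this sequence, using the same values as the trace:

  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create 128 4096             # creates Malloc0
  rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512   # /dev/ublkb0 appears once START_DEV completes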
[2024-11-26 17:37:39.343648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:02.139 [2024-11-26 17:37:39.344309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:02.139 [2024-11-26 17:37:39.366673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:02.139 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:41:02.139 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.139 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:02.139 17:37:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:41:02.139 { 00:41:02.139 "ublk_device": "/dev/ublkb0", 00:41:02.139 "id": 0, 00:41:02.139 "queue_depth": 512, 00:41:02.139 "num_queues": 4, 00:41:02.139 "bdev_name": "Malloc0" 00:41:02.139 } 00:41:02.139 ]' 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:41:02.139 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:41:02.398 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:41:02.398 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:41:02.398 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:41:02.398 17:37:39 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
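run_fio_test expands to the single fio invocation that follows. Because --time_based --runtime=10 spends the whole budget on writes, fio prints "verification read phase will never start"; the data is still written with the 0xcc verify pattern, so a separate read pass could perform the check afterwards. A sketch of that follow-up pass, reusing only flags from the template above:

  fio --name=fio_verify --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=read \
      --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc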
00:41:02.398 17:37:39 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:41:02.398 fio: verification read phase will never start because write phase uses all of runtime 00:41:02.398 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:41:02.398 fio-3.35 00:41:02.398 Starting 1 process 00:41:14.652 00:41:14.652 fio_test: (groupid=0, jobs=1): err= 0: pid=76160: Tue Nov 26 17:37:49 2024 00:41:14.652 write: IOPS=17.5k, BW=68.3MiB/s (71.6MB/s)(683MiB/10002msec); 0 zone resets 00:41:14.652 clat (usec): min=33, max=4026, avg=56.43, stdev=91.02 00:41:14.652 lat (usec): min=33, max=4026, avg=56.88, stdev=91.03 00:41:14.652 clat percentiles (usec): 00:41:14.652 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:41:14.652 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 53], 00:41:14.652 | 70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 59], 95.00th=[ 62], 00:41:14.652 | 99.00th=[ 75], 99.50th=[ 81], 99.90th=[ 1844], 99.95th=[ 2671], 00:41:14.652 | 99.99th=[ 3359] 00:41:14.652 bw ( KiB/s): min=68080, max=80934, per=100.00%, avg=70210.42, stdev=2769.71, samples=19 00:41:14.652 iops : min=17020, max=20233, avg=17552.58, stdev=692.32, samples=19 00:41:14.652 lat (usec) : 50=19.10%, 100=80.67%, 250=0.06%, 500=0.01%, 750=0.01% 00:41:14.652 lat (usec) : 1000=0.01% 00:41:14.652 lat (msec) : 2=0.05%, 4=0.09%, 10=0.01% 00:41:14.652 cpu : usr=2.42%, sys=9.70%, ctx=174816, majf=0, minf=796 00:41:14.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.652 issued rwts: total=0,174816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:14.652 00:41:14.652 Run status group 0 (all jobs): 00:41:14.652 WRITE: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=683MiB (716MB), run=10002-10002msec 00:41:14.652 00:41:14.652 Disk stats (read/write): 00:41:14.652 ublkb0: ios=0/172983, merge=0/0, ticks=0/8777, in_queue=8778, util=99.14% 00:41:14.652 17:37:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.652 [2024-11-26 17:37:49.898096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:14.652 [2024-11-26 17:37:49.938170] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:14.652 [2024-11-26 17:37:49.939019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:14.652 [2024-11-26 17:37:49.945647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:14.652 [2024-11-26 17:37:49.945975] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:14.652 [2024-11-26 17:37:49.945994] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.652 17:37:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:41:14.652 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 [2024-11-26 17:37:49.961747] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:41:14.653 request: 00:41:14.653 { 00:41:14.653 "ublk_id": 0, 00:41:14.653 "method": "ublk_stop_disk", 00:41:14.653 "req_id": 1 00:41:14.653 } 00:41:14.653 Got JSON-RPC error response 00:41:14.653 response: 00:41:14.653 { 00:41:14.653 "code": -19, 00:41:14.653 "message": "No such device" 00:41:14.653 } 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:14.653 17:37:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 [2024-11-26 17:37:49.977741] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:14.653 [2024-11-26 17:37:49.985646] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:14.653 [2024-11-26 17:37:49.985684] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:50 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:41:14.653 17:37:50 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:41:14.653 ************************************ 00:41:14.653 END TEST test_create_ublk 00:41:14.653 ************************************ 00:41:14.653 17:37:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:41:14.653 00:41:14.653 real 0m11.957s 00:41:14.653 user 0m0.667s 00:41:14.653 sys 0m1.091s 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.653 17:37:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 17:37:50 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:41:14.653 17:37:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:14.653 17:37:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.653 17:37:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 ************************************ 00:41:14.653 START TEST test_create_multi_ublk 00:41:14.653 ************************************ 00:41:14.653 17:37:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:41:14.653 17:37:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:41:14.653 17:37:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 [2024-11-26 17:37:50.996654] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:14.653 [2024-11-26 17:37:50.999705] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:14.653 17:37:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 [2024-11-26 17:37:51.333801] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
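test_create_multi_ublk, which starts below, repeats the same create sequence once per device (MAX_DEV_ID is 3 in this run), pairing one malloc bdev with each ublk id. The loop distilled to a sketch:

  for i in $(seq 0 3); do
    rpc_cmd bdev_malloc_create -b Malloc$i 128 4096
    rpc_cmd ublk_start_disk Malloc$i $i -q 4 -d 512   # /dev/ublkb$i
  done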
00:41:14.653 [2024-11-26 17:37:51.334319] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:41:14.653 [2024-11-26 17:37:51.334336] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:14.653 [2024-11-26 17:37:51.334351] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:14.653 [2024-11-26 17:37:51.337322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:14.653 [2024-11-26 17:37:51.337351] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:14.653 [2024-11-26 17:37:51.347631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:14.653 [2024-11-26 17:37:51.348308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:14.653 [2024-11-26 17:37:51.357178] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.653 [2024-11-26 17:37:51.705797] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:41:14.653 [2024-11-26 17:37:51.706236] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:41:14.653 [2024-11-26 17:37:51.706255] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:14.653 [2024-11-26 17:37:51.706263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:41:14.653 [2024-11-26 17:37:51.715092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:14.653 [2024-11-26 17:37:51.715114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:14.653 [2024-11-26 17:37:51.721660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:14.653 [2024-11-26 17:37:51.722390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:41:14.653 [2024-11-26 17:37:51.730654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:41:14.653 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.654 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:41:14.654 17:37:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:14.654 17:37:51 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:41:14.654 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.654 17:37:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.654 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.654 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:41:14.654 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:41:14.654 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.654 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:14.654 [2024-11-26 17:37:52.079764] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:41:14.654 [2024-11-26 17:37:52.080217] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:41:14.654 [2024-11-26 17:37:52.080233] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:41:14.654 [2024-11-26 17:37:52.080243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:41:14.654 [2024-11-26 17:37:52.087716] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:14.654 [2024-11-26 17:37:52.087743] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:14.654 [2024-11-26 17:37:52.094669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:14.654 [2024-11-26 17:37:52.095380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:41:14.914 [2024-11-26 17:37:52.098453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:41:14.914 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.914 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:41:14.914 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:14.914 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:41:14.914 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.914 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:15.174 [2024-11-26 17:37:52.434806] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:41:15.174 [2024-11-26 17:37:52.435282] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:41:15.174 [2024-11-26 17:37:52.435303] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:41:15.174 [2024-11-26 17:37:52.435311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:41:15.174 [2024-11-26 
17:37:52.444067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:15.174 [2024-11-26 17:37:52.444089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:15.174 [2024-11-26 17:37:52.450647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:15.174 [2024-11-26 17:37:52.451331] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:41:15.174 [2024-11-26 17:37:52.454407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:41:15.174 { 00:41:15.174 "ublk_device": "/dev/ublkb0", 00:41:15.174 "id": 0, 00:41:15.174 "queue_depth": 512, 00:41:15.174 "num_queues": 4, 00:41:15.174 "bdev_name": "Malloc0" 00:41:15.174 }, 00:41:15.174 { 00:41:15.174 "ublk_device": "/dev/ublkb1", 00:41:15.174 "id": 1, 00:41:15.174 "queue_depth": 512, 00:41:15.174 "num_queues": 4, 00:41:15.174 "bdev_name": "Malloc1" 00:41:15.174 }, 00:41:15.174 { 00:41:15.174 "ublk_device": "/dev/ublkb2", 00:41:15.174 "id": 2, 00:41:15.174 "queue_depth": 512, 00:41:15.174 "num_queues": 4, 00:41:15.174 "bdev_name": "Malloc2" 00:41:15.174 }, 00:41:15.174 { 00:41:15.174 "ublk_device": "/dev/ublkb3", 00:41:15.174 "id": 3, 00:41:15.174 "queue_depth": 512, 00:41:15.174 "num_queues": 4, 00:41:15.174 "bdev_name": "Malloc3" 00:41:15.174 } 00:41:15.174 ]' 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:41:15.174 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:41:15.175 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:41:15.175 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:15.175 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
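The jq checks running through this stretch assert, field by field, that ublk_get_disks reports exactly what was created. The pattern for one device, as a sketch:

  ublk_dev=$(rpc_cmd ublk_get_disks)
  [[ $(jq -r '.[1].ublk_device' <<< "$ublk_dev") == /dev/ublkb1 ]]
  [[ $(jq -r '.[1].queue_depth' <<< "$ublk_dev") == 512 ]]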
00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:15.434 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:41:15.693 17:37:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:41:15.693 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:15.693 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:41:15.693 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:15.694 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.953 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:15.953 [2024-11-26 17:37:53.351790] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:15.953 [2024-11-26 17:37:53.392679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:15.953 [2024-11-26 17:37:53.393540] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:16.213 [2024-11-26 17:37:53.401674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:16.213 [2024-11-26 17:37:53.402062] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:16.213 [2024-11-26 17:37:53.402086] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:16.213 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.213 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:16.213 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:41:16.213 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.213 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:16.213 [2024-11-26 17:37:53.409739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:41:16.213 [2024-11-26 17:37:53.441701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:16.213 [2024-11-26 17:37:53.442510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:41:16.213 [2024-11-26 17:37:53.450683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:16.213 [2024-11-26 17:37:53.451028] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:41:16.213 [2024-11-26 17:37:53.451056] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:16.214 [2024-11-26 17:37:53.465770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:41:16.214 [2024-11-26 17:37:53.501695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:16.214 [2024-11-26 17:37:53.502414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:41:16.214 [2024-11-26 17:37:53.509660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:16.214 [2024-11-26 17:37:53.509984] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:41:16.214 [2024-11-26 17:37:53.510003] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
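The stop of ublk3 below completes the same pattern for every device: teardown mirrors creation in reverse, with each disk stopped in turn (STOP_DEV then DEL_DEV in the trace) before the target itself is destroyed via rpc.py with an extended -t 120 timeout so the RPC waits out the full shutdown. Distilled:

  for i in $(seq 0 3); do rpc_cmd ublk_stop_disk $i; done
  scripts/rpc.py -t 120 ublk_destroy_target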
00:41:16.214 [2024-11-26 17:37:53.525717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:41:16.214 [2024-11-26 17:37:53.568159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:16.214 [2024-11-26 17:37:53.568972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:41:16.214 [2024-11-26 17:37:53.577661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:16.214 [2024-11-26 17:37:53.577976] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:41:16.214 [2024-11-26 17:37:53.577995] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.214 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:41:16.473 [2024-11-26 17:37:53.779773] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:16.473 [2024-11-26 17:37:53.786759] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:16.473 [2024-11-26 17:37:53.786809] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:41:16.473 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:41:16.473 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:16.473 17:37:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:41:16.473 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.473 17:37:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.413 17:37:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.413 17:37:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:17.413 17:37:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:41:17.413 17:37:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.413 17:37:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.698 17:37:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.698 17:37:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:17.698 17:37:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:41:17.698 17:37:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.698 17:37:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.963 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.963 17:37:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:17.963 17:37:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:41:17.963 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.963 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:41:18.532 00:41:18.532 real 0m4.941s 00:41:18.532 user 0m1.063s 00:41:18.532 sys 0m0.208s 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.532 ************************************ 00:41:18.532 END TEST test_create_multi_ublk 00:41:18.532 ************************************ 00:41:18.532 17:37:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:18.532 17:37:55 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:18.532 17:37:55 ublk -- ublk/ublk.sh@147 -- # cleanup 00:41:18.532 17:37:55 ublk -- ublk/ublk.sh@130 -- # killprocess 76108 00:41:18.532 17:37:55 ublk -- common/autotest_common.sh@954 -- # '[' -z 76108 ']' 00:41:18.532 17:37:55 ublk -- common/autotest_common.sh@958 -- # kill -0 76108 00:41:18.792 17:37:55 ublk -- common/autotest_common.sh@959 -- # uname 00:41:18.792 17:37:55 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:18.792 17:37:55 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76108 00:41:18.792 17:37:56 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:18.792 17:37:56 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:18.792 killing process with pid 76108 00:41:18.792 17:37:56 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76108' 00:41:18.792 17:37:56 ublk -- common/autotest_common.sh@973 -- # kill 76108 00:41:18.792 17:37:56 ublk -- common/autotest_common.sh@978 -- # wait 76108 00:41:20.173 [2024-11-26 17:37:57.295507] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:20.173 [2024-11-26 17:37:57.295587] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:21.556 00:41:21.556 real 0m32.484s 00:41:21.556 user 0m46.232s 00:41:21.556 sys 0m10.819s 00:41:21.556 17:37:58 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.556 ************************************ 00:41:21.556 END TEST ublk 00:41:21.556 ************************************ 00:41:21.556 17:37:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:21.556 17:37:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:41:21.556 17:37:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:41:21.556 17:37:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.556 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:41:21.556 ************************************ 00:41:21.556 START TEST ublk_recovery 00:41:21.556 ************************************ 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:41:21.556 * Looking for test storage... 00:41:21.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:21.556 17:37:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:21.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.556 --rc genhtml_branch_coverage=1 00:41:21.556 --rc genhtml_function_coverage=1 00:41:21.556 --rc genhtml_legend=1 00:41:21.556 --rc geninfo_all_blocks=1 00:41:21.556 --rc geninfo_unexecuted_blocks=1 00:41:21.556 00:41:21.556 ' 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:21.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.556 --rc genhtml_branch_coverage=1 00:41:21.556 --rc genhtml_function_coverage=1 00:41:21.556 --rc genhtml_legend=1 00:41:21.556 --rc geninfo_all_blocks=1 00:41:21.556 --rc geninfo_unexecuted_blocks=1 00:41:21.556 00:41:21.556 ' 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:21.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.556 --rc genhtml_branch_coverage=1 00:41:21.556 --rc genhtml_function_coverage=1 00:41:21.556 --rc genhtml_legend=1 00:41:21.556 --rc geninfo_all_blocks=1 00:41:21.556 --rc geninfo_unexecuted_blocks=1 00:41:21.556 00:41:21.556 ' 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:21.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.556 --rc genhtml_branch_coverage=1 00:41:21.556 --rc genhtml_function_coverage=1 00:41:21.556 --rc genhtml_legend=1 00:41:21.556 --rc geninfo_all_blocks=1 00:41:21.556 --rc geninfo_unexecuted_blocks=1 00:41:21.556 00:41:21.556 ' 00:41:21.556 17:37:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:41:21.556 17:37:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:41:21.556 17:37:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:41:21.556 17:37:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76535 00:41:21.556 17:37:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:41:21.556 17:37:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:21.556 17:37:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76535 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76535 ']' 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:21.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:21.556 17:37:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:21.816 [2024-11-26 17:37:59.103426] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:41:21.816 [2024-11-26 17:37:59.103679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76535 ] 00:41:22.076 [2024-11-26 17:37:59.285809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:22.076 [2024-11-26 17:37:59.429422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.076 [2024-11-26 17:37:59.429461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.015 17:38:00 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:23.015 17:38:00 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:41:23.015 17:38:00 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:41:23.015 17:38:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.015 17:38:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:23.275 [2024-11-26 17:38:00.462667] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:23.275 [2024-11-26 17:38:00.465937] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.275 17:38:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:23.275 malloc0 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.275 17:38:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:23.275 [2024-11-26 17:38:00.657816] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:41:23.275 [2024-11-26 17:38:00.657949] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:41:23.275 [2024-11-26 17:38:00.657964] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:23.275 [2024-11-26 17:38:00.657974] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:41:23.275 [2024-11-26 17:38:00.665670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:23.275 [2024-11-26 17:38:00.665697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:23.275 [2024-11-26 17:38:00.673640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:23.275 [2024-11-26 17:38:00.673810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:41:23.275 [2024-11-26 17:38:00.696674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:41:23.275 1 00:41:23.275 17:38:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.275 17:38:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:41:24.657 17:38:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76577 00:41:24.657 17:38:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:41:24.657 17:38:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:41:24.657 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:24.657 fio-3.35 00:41:24.657 Starting 1 process 00:41:29.939 17:38:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76535 00:41:29.939 17:38:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:41:35.220 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76535 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:41:35.221 17:38:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:41:35.221 17:38:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76688 00:41:35.221 17:38:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:35.221 17:38:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76688 00:41:35.221 17:38:11 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76688 ']' 00:41:35.221 17:38:11 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:35.221 17:38:11 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:35.221 17:38:11 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:35.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:35.221 17:38:11 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:35.221 17:38:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:35.221 [2024-11-26 17:38:11.833825] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
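The restart above is the heart of the test: a ublk disk was created, put under fio load, and the owning SPDK target was then SIGKILLed mid-I/O before a fresh target is launched. The setup half of that flow reduces to three RPCs, each of which drives the kernel handshake visible in the DEBUG trace (UBLK_CMD_ADD_DEV -> UBLK_CMD_SET_PARAMS -> UBLK_CMD_START_DEV). A minimal sketch, assuming rpc.py from scripts/ is on PATH and spdk_tgt is already listening on /var/tmp/spdk.sock:

  rpc.py ublk_create_target                      # bring up the ublk target inside spdk_tgt
  rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MB malloc bdev, 4096-byte blocks
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1: 2 queues, depth 128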
00:41:35.221 [2024-11-26 17:38:11.834061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76688 ] 00:41:35.221 [2024-11-26 17:38:12.004437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:35.221 [2024-11-26 17:38:12.146968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.221 [2024-11-26 17:38:12.147010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:35.789 17:38:13 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:35.789 17:38:13 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:41:35.789 17:38:13 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:41:35.789 17:38:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.789 17:38:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:35.789 [2024-11-26 17:38:13.188640] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:35.789 [2024-11-26 17:38:13.191804] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:35.789 17:38:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.790 17:38:13 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:41:35.790 17:38:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.790 17:38:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:36.049 malloc0 00:41:36.049 17:38:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.049 17:38:13 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:41:36.049 17:38:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.049 17:38:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:41:36.049 [2024-11-26 17:38:13.362834] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:41:36.049 [2024-11-26 17:38:13.362889] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:36.049 [2024-11-26 17:38:13.362900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:41:36.049 [2024-11-26 17:38:13.369690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:41:36.049 [2024-11-26 17:38:13.369721] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:41:36.049 1 00:41:36.049 17:38:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.049 17:38:13 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76577 00:41:36.987 [2024-11-26 17:38:14.367832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:41:36.987 [2024-11-26 17:38:14.377665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:41:36.987 [2024-11-26 17:38:14.377697] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:41:38.368 [2024-11-26 17:38:15.375820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:41:38.368 [2024-11-26 17:38:15.380656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:41:38.368 [2024-11-26 17:38:15.380678] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:41:38.937 [2024-11-26 17:38:16.378786] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:41:39.197 [2024-11-26 17:38:16.385632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:41:39.197 [2024-11-26 17:38:16.385656] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:41:39.197 [2024-11-26 17:38:16.385670] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:41:39.197 [2024-11-26 17:38:16.385788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:42:01.173 [2024-11-26 17:38:37.259667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:42:01.173 [2024-11-26 17:38:37.265985] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:42:01.173 [2024-11-26 17:38:37.271877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:42:01.173 [2024-11-26 17:38:37.271904] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:42:27.780 00:42:27.780 fio_test: (groupid=0, jobs=1): err= 0: pid=76583: Tue Nov 26 17:39:01 2024 00:42:27.780 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(2699MiB/60003msec) 00:42:27.780 slat (nsec): min=1629, max=1080.4k, avg=7750.72, stdev=3621.29 00:42:27.780 clat (usec): min=894, max=30564k, avg=5109.80, stdev=272641.03 00:42:27.780 lat (usec): min=901, max=30564k, avg=5117.55, stdev=272641.02 00:42:27.780 clat percentiles (usec): 00:42:27.780 | 1.00th=[ 1926], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:42:27.780 | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2442], 00:42:27.780 | 70.00th=[ 2540], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 4080], 00:42:27.780 | 99.00th=[ 5669], 99.50th=[ 6194], 99.90th=[ 7767], 99.95th=[ 8979], 00:42:27.780 | 99.99th=[13304] 00:42:27.780 bw ( KiB/s): min=21440, max=106736, per=100.00%, avg=92693.90, stdev=17758.80, samples=59 00:42:27.780 iops : min= 5360, max=26684, avg=23173.46, stdev=4439.69, samples=59 00:42:27.780 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(2696MiB/60003msec); 0 zone resets 00:42:27.780 slat (nsec): min=1744, max=1002.1k, avg=7869.72, stdev=3857.47 00:42:27.780 clat (usec): min=928, max=30564k, avg=5995.43, stdev=314296.81 00:42:27.780 lat (usec): min=935, max=30564k, avg=6003.30, stdev=314296.80 00:42:27.780 clat percentiles (usec): 00:42:27.780 | 1.00th=[ 1926], 5.00th=[ 2147], 10.00th=[ 2245], 00:42:27.780 | 20.00th=[ 2343], 30.00th=[ 2376], 40.00th=[ 2442], 00:42:27.780 | 50.00th=[ 2474], 60.00th=[ 2540], 70.00th=[ 2638], 00:42:27.780 | 80.00th=[ 3326], 90.00th=[ 3720], 95.00th=[ 4146], 00:42:27.780 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 8029], 00:42:27.780 | 99.95th=[ 9110], 99.99th=[17112761] 00:42:27.780 bw ( KiB/s): min=22528, max=106896, per=100.00%, avg=92579.27, stdev=17562.18, samples=59 00:42:27.780 iops : min= 5632, max=26724, avg=23144.81, stdev=4390.54, samples=59 00:42:27.780 lat (usec) : 1000=0.01% 00:42:27.780 lat (msec) : 2=1.89%, 4=92.42%, 10=5.65%, 20=0.03%, >=2000=0.01% 00:42:27.780 cpu : usr=5.28%, sys=18.63%, ctx=59947, majf=0, minf=13 00:42:27.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:42:27.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:42:27.780 issued rwts: total=690959,690089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:27.780 00:42:27.780 Run status group 0 (all jobs): 00:42:27.780 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=2699MiB (2830MB), run=60003-60003msec 00:42:27.780 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=2696MiB (2827MB), run=60003-60003msec 00:42:27.781 00:42:27.781 Disk stats (read/write): 00:42:27.781 ublkb1: ios=688983/688136, merge=0/0, ticks=3462756/3997500, in_queue=7460257, util=99.98% 00:42:27.781 17:39:01 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:42:27.781 17:39:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.781 17:39:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:27.781 [2024-11-26 17:39:01.991246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:42:27.781 [2024-11-26 17:39:02.037735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:42:27.781 [2024-11-26 17:39:02.042004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:42:27.781 [2024-11-26 17:39:02.052648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:42:27.781 [2024-11-26 17:39:02.052899] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:42:27.781 [2024-11-26 17:39:02.052938] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.781 17:39:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:27.781 [2024-11-26 17:39:02.060799] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:42:27.781 [2024-11-26 17:39:02.069794] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:42:27.781 [2024-11-26 17:39:02.069864] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.781 17:39:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:27.781 17:39:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:42:27.781 17:39:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76688 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76688 ']' 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76688 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76688 00:42:27.781 killing process with pid 76688 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76688' 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76688 00:42:27.781 17:39:02 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 76688 00:42:27.781 [2024-11-26 17:39:03.771888] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:42:27.781 [2024-11-26 17:39:03.772027] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:42:28.041 ************************************ 00:42:28.041 END TEST ublk_recovery 00:42:28.041 ************************************ 00:42:28.041 00:42:28.041 real 1m6.502s 00:42:28.041 user 1m52.147s 00:42:28.041 sys 0m24.473s 00:42:28.041 17:39:05 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:28.041 17:39:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:28.041 17:39:05 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:42:28.041 17:39:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:42:28.041 17:39:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:28.041 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:28.041 17:39:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:42:28.041 17:39:05 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:42:28.041 17:39:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:28.041 17:39:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:28.041 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:28.041 ************************************ 00:42:28.041 START TEST ftl 00:42:28.041 ************************************ 00:42:28.041 17:39:05 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:42:28.041 * Looking for test storage... 
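Before the FTL suite output begins, note what the recovery run just demonstrated: fio completed its full 60-second time_based randrw pass across the SIGKILL with zero errors (err= 0) and ~99.98% device utilization. The crash-and-recover half traced above condenses to the sketch below, with paths and variable names taken from this run and rpc.py assumed on PATH:

  kill -9 "$spdk_pid"                            # SIGKILL the target while fio is mid-run
  sleep 5
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # new target; /dev/ublkb1 persists in the kernel
  spdk_pid=$!
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096   # same backing bdev as before the crash
  rpc.py ublk_recover_disk malloc0 1             # GET_DEV_INFO loop, then START/END_USER_RECOVERY
  wait "$fio_proc"                               # fio must finish its time_based run cleanly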
00:42:28.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:28.301 17:39:05 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:28.301 17:39:05 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:42:28.301 17:39:05 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:42:28.301 17:39:05 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:42:28.301 17:39:05 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:28.301 17:39:05 ftl -- scripts/common.sh@344 -- # case "$op" in 00:42:28.301 17:39:05 ftl -- scripts/common.sh@345 -- # : 1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:28.301 17:39:05 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:28.301 17:39:05 ftl -- scripts/common.sh@365 -- # decimal 1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@353 -- # local d=1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:28.301 17:39:05 ftl -- scripts/common.sh@355 -- # echo 1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:42:28.301 17:39:05 ftl -- scripts/common.sh@366 -- # decimal 2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@353 -- # local d=2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:28.301 17:39:05 ftl -- scripts/common.sh@355 -- # echo 2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:42:28.301 17:39:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:28.301 17:39:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:28.301 17:39:05 ftl -- scripts/common.sh@368 -- # return 0 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:28.301 --rc genhtml_branch_coverage=1 00:42:28.301 --rc genhtml_function_coverage=1 00:42:28.301 --rc genhtml_legend=1 00:42:28.301 --rc geninfo_all_blocks=1 00:42:28.301 --rc geninfo_unexecuted_blocks=1 00:42:28.301 00:42:28.301 ' 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:28.301 --rc genhtml_branch_coverage=1 00:42:28.301 --rc genhtml_function_coverage=1 00:42:28.301 --rc genhtml_legend=1 00:42:28.301 --rc geninfo_all_blocks=1 00:42:28.301 --rc geninfo_unexecuted_blocks=1 00:42:28.301 00:42:28.301 ' 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:28.301 --rc genhtml_branch_coverage=1 00:42:28.301 --rc genhtml_function_coverage=1 00:42:28.301 --rc 
genhtml_legend=1 00:42:28.301 --rc geninfo_all_blocks=1 00:42:28.301 --rc geninfo_unexecuted_blocks=1 00:42:28.301 00:42:28.301 ' 00:42:28.301 17:39:05 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:28.301 --rc genhtml_branch_coverage=1 00:42:28.301 --rc genhtml_function_coverage=1 00:42:28.301 --rc genhtml_legend=1 00:42:28.301 --rc geninfo_all_blocks=1 00:42:28.301 --rc geninfo_unexecuted_blocks=1 00:42:28.301 00:42:28.301 ' 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:28.301 17:39:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:42:28.301 17:39:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:28.301 17:39:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:28.301 17:39:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:42:28.301 17:39:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:28.301 17:39:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:28.301 17:39:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:28.301 17:39:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:28.301 17:39:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:28.301 17:39:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:28.301 17:39:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:28.301 17:39:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:28.301 17:39:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:28.301 17:39:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:28.301 17:39:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:28.301 17:39:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:28.301 17:39:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:28.301 17:39:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:28.301 17:39:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:28.301 17:39:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:28.301 17:39:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:28.301 17:39:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:28.301 17:39:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:28.301 17:39:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:28.301 17:39:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:28.301 17:39:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:28.301 17:39:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:28.301 17:39:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:42:28.301 17:39:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:28.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:29.131 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:29.131 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:29.132 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:29.132 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:29.132 17:39:06 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:42:29.132 17:39:06 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77495 00:42:29.132 17:39:06 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77495 00:42:29.132 17:39:06 ftl -- common/autotest_common.sh@835 -- # '[' -z 77495 ']' 00:42:29.132 17:39:06 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.132 17:39:06 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.132 17:39:06 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:29.132 17:39:06 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.132 17:39:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:29.132 [2024-11-26 17:39:06.526440] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:42:29.132 [2024-11-26 17:39:06.526580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77495 ] 00:42:29.391 [2024-11-26 17:39:06.706895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.652 [2024-11-26 17:39:06.846079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.912 17:39:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.912 17:39:07 ftl -- common/autotest_common.sh@868 -- # return 0 00:42:29.912 17:39:07 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:42:30.175 17:39:07 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:42:31.552 17:39:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:42:31.552 17:39:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@50 -- # break 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:42:32.120 17:39:09 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:42:32.120 17:39:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:42:32.379 17:39:09 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:42:32.379 17:39:09 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:42:32.379 17:39:09 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:42:32.379 17:39:09 ftl -- ftl/ftl.sh@63 -- # break 00:42:32.379 17:39:09 ftl -- ftl/ftl.sh@66 -- # killprocess 77495 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 77495 ']' 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@958 -- # kill -0 77495 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@959 -- # uname 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77495 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77495' 00:42:32.379 killing process with pid 77495 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@973 -- # kill 77495 00:42:32.379 17:39:09 ftl -- common/autotest_common.sh@978 -- # wait 77495 00:42:34.919 17:39:12 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:42:34.919 17:39:12 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:42:34.919 17:39:12 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:42:34.919 17:39:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.919 17:39:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:34.919 ************************************ 00:42:34.919 START TEST ftl_fio_basic 00:42:34.919 ************************************ 00:42:34.919 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:42:35.178 * Looking for test storage... 
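Device selection in ftl.sh, traced just above, is pure jq over bdev_get_bdevs output: a non-zoned NVMe namespace with 64-byte metadata (md_size==64) and at least 1310720 blocks qualifies as the FTL non-volatile cache, and any other namespace meeting the size floor becomes the base device. The filters below are copied from the trace; in this run they resolved to 0000:00:10.0 (cache) and 0000:00:11.0 (base):

  rpc.py bdev_get_bdevs | jq -r '.[]
    | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address'
  rpc.py bdev_get_bdevs | jq -r '.[]
    | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
             and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address'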
00:42:35.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:35.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.178 --rc genhtml_branch_coverage=1 00:42:35.178 --rc genhtml_function_coverage=1 00:42:35.178 --rc genhtml_legend=1 00:42:35.178 --rc geninfo_all_blocks=1 00:42:35.178 --rc geninfo_unexecuted_blocks=1 00:42:35.178 00:42:35.178 ' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:35.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.178 --rc 
genhtml_branch_coverage=1 00:42:35.178 --rc genhtml_function_coverage=1 00:42:35.178 --rc genhtml_legend=1 00:42:35.178 --rc geninfo_all_blocks=1 00:42:35.178 --rc geninfo_unexecuted_blocks=1 00:42:35.178 00:42:35.178 ' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:35.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.178 --rc genhtml_branch_coverage=1 00:42:35.178 --rc genhtml_function_coverage=1 00:42:35.178 --rc genhtml_legend=1 00:42:35.178 --rc geninfo_all_blocks=1 00:42:35.178 --rc geninfo_unexecuted_blocks=1 00:42:35.178 00:42:35.178 ' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:35.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.178 --rc genhtml_branch_coverage=1 00:42:35.178 --rc genhtml_function_coverage=1 00:42:35.178 --rc genhtml_legend=1 00:42:35.178 --rc geninfo_all_blocks=1 00:42:35.178 --rc geninfo_unexecuted_blocks=1 00:42:35.178 00:42:35.178 ' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:35.178 
17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:42:35.178 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77644 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77644 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77644 ']' 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:35.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
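The suite table above is how fio.sh maps its third positional argument to fio job configs: 'basic' expands to randw-verify, randw-verify-j2 and randw-verify-depth128, each expected as a job file under test/ftl/config/fio. A sketch of that expansion follows; the suite lookup matches the tests= assignment in the trace, but the loop shape is an assumption here, not a verbatim quote of fio.sh:

  tests=${suite[$3]}   # $3 was "basic" -> "randw-verify randw-verify-j2 randw-verify-depth128"
  for test in $tests; do
      # assumed layout: one job file per test name under the ftl test directory
      fio "$testdir/config/fio/$test.fio"
  done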
00:42:35.179 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:35.179 17:39:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:42:35.437 [2024-11-26 17:39:12.676505] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:42:35.437 [2024-11-26 17:39:12.676771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77644 ] 00:42:35.437 [2024-11-26 17:39:12.853702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:35.696 [2024-11-26 17:39:13.002844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.696 [2024-11-26 17:39:13.002974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.696 [2024-11-26 17:39:13.003020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:37.074 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:37.332 { 00:42:37.332 "name": "nvme0n1", 00:42:37.332 "aliases": [ 00:42:37.332 "b768ed5e-10df-4fe8-a12e-0163b686ca08" 00:42:37.332 ], 00:42:37.332 "product_name": "NVMe disk", 00:42:37.332 "block_size": 4096, 00:42:37.332 "num_blocks": 1310720, 00:42:37.332 "uuid": "b768ed5e-10df-4fe8-a12e-0163b686ca08", 00:42:37.332 "numa_id": -1, 00:42:37.332 "assigned_rate_limits": { 00:42:37.332 "rw_ios_per_sec": 0, 00:42:37.332 "rw_mbytes_per_sec": 0, 00:42:37.332 "r_mbytes_per_sec": 0, 00:42:37.332 "w_mbytes_per_sec": 0 00:42:37.332 }, 00:42:37.332 "claimed": false, 00:42:37.332 "zoned": false, 00:42:37.332 "supported_io_types": { 00:42:37.332 "read": true, 00:42:37.332 "write": true, 00:42:37.332 "unmap": true, 00:42:37.332 "flush": true, 00:42:37.332 "reset": true, 00:42:37.332 "nvme_admin": true, 00:42:37.332 "nvme_io": true, 00:42:37.332 "nvme_io_md": 
false, 00:42:37.332 "write_zeroes": true, 00:42:37.332 "zcopy": false, 00:42:37.332 "get_zone_info": false, 00:42:37.332 "zone_management": false, 00:42:37.332 "zone_append": false, 00:42:37.332 "compare": true, 00:42:37.332 "compare_and_write": false, 00:42:37.332 "abort": true, 00:42:37.332 "seek_hole": false, 00:42:37.332 "seek_data": false, 00:42:37.332 "copy": true, 00:42:37.332 "nvme_iov_md": false 00:42:37.332 }, 00:42:37.332 "driver_specific": { 00:42:37.332 "nvme": [ 00:42:37.332 { 00:42:37.332 "pci_address": "0000:00:11.0", 00:42:37.332 "trid": { 00:42:37.332 "trtype": "PCIe", 00:42:37.332 "traddr": "0000:00:11.0" 00:42:37.332 }, 00:42:37.332 "ctrlr_data": { 00:42:37.332 "cntlid": 0, 00:42:37.332 "vendor_id": "0x1b36", 00:42:37.332 "model_number": "QEMU NVMe Ctrl", 00:42:37.332 "serial_number": "12341", 00:42:37.332 "firmware_revision": "8.0.0", 00:42:37.332 "subnqn": "nqn.2019-08.org.qemu:12341", 00:42:37.332 "oacs": { 00:42:37.332 "security": 0, 00:42:37.332 "format": 1, 00:42:37.332 "firmware": 0, 00:42:37.332 "ns_manage": 1 00:42:37.332 }, 00:42:37.332 "multi_ctrlr": false, 00:42:37.332 "ana_reporting": false 00:42:37.332 }, 00:42:37.332 "vs": { 00:42:37.332 "nvme_version": "1.4" 00:42:37.332 }, 00:42:37.332 "ns_data": { 00:42:37.332 "id": 1, 00:42:37.332 "can_share": false 00:42:37.332 } 00:42:37.332 } 00:42:37.332 ], 00:42:37.332 "mp_policy": "active_passive" 00:42:37.332 } 00:42:37.332 } 00:42:37.332 ]' 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:37.332 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:37.591 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:42:37.591 17:39:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:42:37.849 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=66f66f8b-db01-4eaa-b4ed-345165ab63d9 00:42:37.849 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 66f66f8b-db01-4eaa-b4ed-345165ab63d9 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.107 17:39:15 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:38.107 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.365 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:38.365 { 00:42:38.365 "name": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:38.365 "aliases": [ 00:42:38.365 "lvs/nvme0n1p0" 00:42:38.365 ], 00:42:38.365 "product_name": "Logical Volume", 00:42:38.365 "block_size": 4096, 00:42:38.365 "num_blocks": 26476544, 00:42:38.365 "uuid": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:38.365 "assigned_rate_limits": { 00:42:38.365 "rw_ios_per_sec": 0, 00:42:38.365 "rw_mbytes_per_sec": 0, 00:42:38.365 "r_mbytes_per_sec": 0, 00:42:38.365 "w_mbytes_per_sec": 0 00:42:38.365 }, 00:42:38.365 "claimed": false, 00:42:38.365 "zoned": false, 00:42:38.365 "supported_io_types": { 00:42:38.365 "read": true, 00:42:38.365 "write": true, 00:42:38.365 "unmap": true, 00:42:38.365 "flush": false, 00:42:38.365 "reset": true, 00:42:38.365 "nvme_admin": false, 00:42:38.365 "nvme_io": false, 00:42:38.365 "nvme_io_md": false, 00:42:38.365 "write_zeroes": true, 00:42:38.365 "zcopy": false, 00:42:38.365 "get_zone_info": false, 00:42:38.365 "zone_management": false, 00:42:38.365 "zone_append": false, 00:42:38.365 "compare": false, 00:42:38.365 "compare_and_write": false, 00:42:38.365 "abort": false, 00:42:38.365 "seek_hole": true, 00:42:38.365 "seek_data": true, 00:42:38.365 "copy": false, 00:42:38.365 "nvme_iov_md": false 00:42:38.365 }, 00:42:38.365 "driver_specific": { 00:42:38.365 "lvol": { 00:42:38.365 "lvol_store_uuid": "66f66f8b-db01-4eaa-b4ed-345165ab63d9", 00:42:38.365 "base_bdev": "nvme0n1", 00:42:38.365 "thin_provision": true, 00:42:38.365 "num_allocated_clusters": 0, 00:42:38.365 "snapshot": false, 00:42:38.365 "clone": false, 00:42:38.365 "esnap_clone": false 00:42:38.365 } 00:42:38.365 } 00:42:38.365 } 00:42:38.365 ]' 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:42:38.366 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:38.624 17:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:38.882 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:38.882 { 00:42:38.882 "name": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:38.882 "aliases": [ 00:42:38.882 "lvs/nvme0n1p0" 00:42:38.882 ], 00:42:38.882 "product_name": "Logical Volume", 00:42:38.882 "block_size": 4096, 00:42:38.882 "num_blocks": 26476544, 00:42:38.882 "uuid": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:38.882 "assigned_rate_limits": { 00:42:38.882 "rw_ios_per_sec": 0, 00:42:38.882 "rw_mbytes_per_sec": 0, 00:42:38.882 "r_mbytes_per_sec": 0, 00:42:38.882 "w_mbytes_per_sec": 0 00:42:38.882 }, 00:42:38.882 "claimed": false, 00:42:38.882 "zoned": false, 00:42:38.882 "supported_io_types": { 00:42:38.882 "read": true, 00:42:38.882 "write": true, 00:42:38.882 "unmap": true, 00:42:38.882 "flush": false, 00:42:38.882 "reset": true, 00:42:38.882 "nvme_admin": false, 00:42:38.882 "nvme_io": false, 00:42:38.882 "nvme_io_md": false, 00:42:38.882 "write_zeroes": true, 00:42:38.882 "zcopy": false, 00:42:38.882 "get_zone_info": false, 00:42:38.882 "zone_management": false, 00:42:38.882 "zone_append": false, 00:42:38.882 "compare": false, 00:42:38.882 "compare_and_write": false, 00:42:38.882 "abort": false, 00:42:38.882 "seek_hole": true, 00:42:38.882 "seek_data": true, 00:42:38.882 "copy": false, 00:42:38.882 "nvme_iov_md": false 00:42:38.882 }, 00:42:38.882 "driver_specific": { 00:42:38.882 "lvol": { 00:42:38.882 "lvol_store_uuid": "66f66f8b-db01-4eaa-b4ed-345165ab63d9", 00:42:38.882 "base_bdev": "nvme0n1", 00:42:38.883 "thin_provision": true, 00:42:38.883 "num_allocated_clusters": 0, 00:42:38.883 "snapshot": false, 00:42:38.883 "clone": false, 00:42:38.883 "esnap_clone": false 00:42:38.883 } 00:42:38.883 } 00:42:38.883 } 00:42:38.883 ]' 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:42:38.883 17:39:16 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:42:39.142 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:39.142 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31a71a5f-57bf-424f-984c-c3ea80121b25 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:39.401 { 00:42:39.401 "name": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:39.401 "aliases": [ 00:42:39.401 "lvs/nvme0n1p0" 00:42:39.401 ], 00:42:39.401 "product_name": "Logical Volume", 00:42:39.401 "block_size": 4096, 00:42:39.401 "num_blocks": 26476544, 00:42:39.401 "uuid": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:39.401 "assigned_rate_limits": { 00:42:39.401 "rw_ios_per_sec": 0, 00:42:39.401 "rw_mbytes_per_sec": 0, 00:42:39.401 "r_mbytes_per_sec": 0, 00:42:39.401 "w_mbytes_per_sec": 0 00:42:39.401 }, 00:42:39.401 "claimed": false, 00:42:39.401 "zoned": false, 00:42:39.401 "supported_io_types": { 00:42:39.401 "read": true, 00:42:39.401 "write": true, 00:42:39.401 "unmap": true, 00:42:39.401 "flush": false, 00:42:39.401 "reset": true, 00:42:39.401 "nvme_admin": false, 00:42:39.401 "nvme_io": false, 00:42:39.401 "nvme_io_md": false, 00:42:39.401 "write_zeroes": true, 00:42:39.401 "zcopy": false, 00:42:39.401 "get_zone_info": false, 00:42:39.401 "zone_management": false, 00:42:39.401 "zone_append": false, 00:42:39.401 "compare": false, 00:42:39.401 "compare_and_write": false, 00:42:39.401 "abort": false, 00:42:39.401 "seek_hole": true, 00:42:39.401 "seek_data": true, 00:42:39.401 "copy": false, 00:42:39.401 "nvme_iov_md": false 00:42:39.401 }, 00:42:39.401 "driver_specific": { 00:42:39.401 "lvol": { 00:42:39.401 "lvol_store_uuid": "66f66f8b-db01-4eaa-b4ed-345165ab63d9", 00:42:39.401 "base_bdev": "nvme0n1", 00:42:39.401 "thin_provision": true, 00:42:39.401 "num_allocated_clusters": 0, 00:42:39.401 "snapshot": false, 00:42:39.401 "clone": false, 00:42:39.401 "esnap_clone": false 00:42:39.401 } 00:42:39.401 } 00:42:39.401 } 00:42:39.401 ]' 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:42:39.401 17:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 31a71a5f-57bf-424f-984c-c3ea80121b25 -c nvc0n1p0 --l2p_dram_limit 60 00:42:39.661 [2024-11-26 17:39:16.956809] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.956868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:39.661 [2024-11-26 17:39:16.956887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:42:39.661 [2024-11-26 17:39:16.956896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.957000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.957013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:39.661 [2024-11-26 17:39:16.957024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:42:39.661 [2024-11-26 17:39:16.957032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.957068] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:39.661 [2024-11-26 17:39:16.958353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:39.661 [2024-11-26 17:39:16.958437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.958468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:39.661 [2024-11-26 17:39:16.958508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:42:39.661 [2024-11-26 17:39:16.958529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.958644] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 39ca1966-46a0-4862-a57c-60297cde8375 00:42:39.661 [2024-11-26 17:39:16.961369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.961450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:39.661 [2024-11-26 17:39:16.961485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:42:39.661 [2024-11-26 17:39:16.961516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.976252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.976368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:39.661 [2024-11-26 17:39:16.976416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:42:39.661 [2024-11-26 17:39:16.976447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.976627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.976675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:39.661 [2024-11-26 17:39:16.976706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:42:39.661 [2024-11-26 17:39:16.976743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.976850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.976890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:39.661 [2024-11-26 17:39:16.976920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:42:39.661 [2024-11-26 17:39:16.976955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:42:39.661 [2024-11-26 17:39:16.977014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:39.661 [2024-11-26 17:39:16.983491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.983566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:39.661 [2024-11-26 17:39:16.983618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.499 ms 00:42:39.661 [2024-11-26 17:39:16.983648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.983713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.983746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:39.661 [2024-11-26 17:39:16.983779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:39.661 [2024-11-26 17:39:16.983808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.983889] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:39.661 [2024-11-26 17:39:16.984093] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:39.661 [2024-11-26 17:39:16.984147] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:39.661 [2024-11-26 17:39:16.984161] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:39.661 [2024-11-26 17:39:16.984183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:39.661 [2024-11-26 17:39:16.984193] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:39.661 [2024-11-26 17:39:16.984205] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:39.661 [2024-11-26 17:39:16.984214] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:39.661 [2024-11-26 17:39:16.984225] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:39.661 [2024-11-26 17:39:16.984233] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:39.661 [2024-11-26 17:39:16.984248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.984256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:39.661 [2024-11-26 17:39:16.984270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:42:39.661 [2024-11-26 17:39:16.984278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.984375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.661 [2024-11-26 17:39:16.984385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:39.661 [2024-11-26 17:39:16.984396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:42:39.661 [2024-11-26 17:39:16.984404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.661 [2024-11-26 17:39:16.984527] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:39.661 [2024-11-26 17:39:16.984541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:39.661 
[2024-11-26 17:39:16.984552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:39.661 [2024-11-26 17:39:16.984560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:39.661 [2024-11-26 17:39:16.984571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:39.661 [2024-11-26 17:39:16.984578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:39.661 [2024-11-26 17:39:16.984588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:39.662 [2024-11-26 17:39:16.984619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:39.662 [2024-11-26 17:39:16.984636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:39.662 [2024-11-26 17:39:16.984644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:39.662 [2024-11-26 17:39:16.984657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:39.662 [2024-11-26 17:39:16.984665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:39.662 [2024-11-26 17:39:16.984675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:39.662 [2024-11-26 17:39:16.984682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:39.662 [2024-11-26 17:39:16.984705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:39.662 [2024-11-26 17:39:16.984731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:39.662 [2024-11-26 17:39:16.984755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:39.662 [2024-11-26 17:39:16.984780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:39.662 [2024-11-26 17:39:16.984803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:39.662 [2024-11-26 17:39:16.984842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:42:39.662 [2024-11-26 17:39:16.984878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:39.662 [2024-11-26 17:39:16.984885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:39.662 [2024-11-26 17:39:16.984894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:39.662 [2024-11-26 17:39:16.984900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:39.662 [2024-11-26 17:39:16.984910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:39.662 [2024-11-26 17:39:16.984917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:39.662 [2024-11-26 17:39:16.984933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:39.662 [2024-11-26 17:39:16.984943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984952] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:39.662 [2024-11-26 17:39:16.984964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:39.662 [2024-11-26 17:39:16.984972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:39.662 [2024-11-26 17:39:16.984982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:39.662 [2024-11-26 17:39:16.984990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:39.662 [2024-11-26 17:39:16.985003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:39.662 [2024-11-26 17:39:16.985010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:39.662 [2024-11-26 17:39:16.985020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:39.662 [2024-11-26 17:39:16.985027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:39.662 [2024-11-26 17:39:16.985036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:39.662 [2024-11-26 17:39:16.985049] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:39.662 [2024-11-26 17:39:16.985062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:39.662 [2024-11-26 17:39:16.985081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:39.662 [2024-11-26 17:39:16.985088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:39.662 [2024-11-26 17:39:16.985099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:39.662 [2024-11-26 17:39:16.985107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:39.662 [2024-11-26 17:39:16.985117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:39.662 [2024-11-26 
17:39:16.985124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:39.662 [2024-11-26 17:39:16.985134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:39.662 [2024-11-26 17:39:16.985141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:39.662 [2024-11-26 17:39:16.985153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:39.662 [2024-11-26 17:39:16.985196] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:39.662 [2024-11-26 17:39:16.985214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:39.662 [2024-11-26 17:39:16.985233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:39.662 [2024-11-26 17:39:16.985240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:39.662 [2024-11-26 17:39:16.985251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:39.662 [2024-11-26 17:39:16.985262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.662 [2024-11-26 17:39:16.985277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:39.662 [2024-11-26 17:39:16.985285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:42:39.662 [2024-11-26 17:39:16.985302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.662 [2024-11-26 17:39:16.985378] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
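
For reference, the FTL setup that produced the records above boils down to a short RPC sequence. A minimal sketch, assuming the lvol UUID, PCIe address, and sizes seen in this particular run (all of these vary between runs):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base=31a71a5f-57bf-424f-984c-c3ea80121b25

    # get_bdev_size: block_size * num_blocks, reported in MiB.
    # Here: 4096 B * 26476544 blocks = 103424 MiB.
    bs=$($rpc bdev_get_bdevs -b "$base" | jq '.[] .block_size')
    nb=$($rpc bdev_get_bdevs -b "$base" | jq '.[] .num_blocks')
    echo $(( bs * nb / 1024 / 1024 ))

    # Attach the PCIe controller used for the NV cache and carve one
    # 5171 MiB split (nvc0n1p0) out of it as the write-buffer cache.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # Create the FTL bdev on top of the lvol, with the split as NV cache
    # and the L2P table capped at 60 MiB of DRAM; -t 240 widens the RPC
    # timeout because startup scrubs the cache (about 4 s in this run).
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60

The `line 52: [: -eq: unary operator expected` message earlier is the usual bash symptom of a numeric test whose left operand expanded to nothing, as the xtrace `'[' -eq 1 ']'` shows; the test fails and the run falls through to the default branch. A defensive form such as `[ "${flag:-0}" -eq 1 ]` (variable name hypothetical) would avoid the message.
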
00:42:39.662 [2024-11-26 17:39:16.985401] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:42:43.854 [2024-11-26 17:39:20.986104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.854 [2024-11-26 17:39:20.986213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:42:43.854 [2024-11-26 17:39:20.986234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4008.438 ms 00:42:43.854 [2024-11-26 17:39:20.986246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.854 [2024-11-26 17:39:21.034317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.854 [2024-11-26 17:39:21.034391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:43.854 [2024-11-26 17:39:21.034408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.817 ms 00:42:43.854 [2024-11-26 17:39:21.034421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.854 [2024-11-26 17:39:21.034627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.854 [2024-11-26 17:39:21.034643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:43.854 [2024-11-26 17:39:21.034653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:42:43.854 [2024-11-26 17:39:21.034668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.854 [2024-11-26 17:39:21.100711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.854 [2024-11-26 17:39:21.100886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:43.854 [2024-11-26 17:39:21.100906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.114 ms 00:42:43.854 [2024-11-26 17:39:21.100919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.854 [2024-11-26 17:39:21.100977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.854 [2024-11-26 17:39:21.100989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:43.854 [2024-11-26 17:39:21.100998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:43.854 [2024-11-26 17:39:21.101008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.855 [2024-11-26 17:39:21.101930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.855 [2024-11-26 17:39:21.101956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:43.855 [2024-11-26 17:39:21.101969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:42:43.855 [2024-11-26 17:39:21.101980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.855 [2024-11-26 17:39:21.102136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.855 [2024-11-26 17:39:21.102151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:43.855 [2024-11-26 17:39:21.102160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:42:43.855 [2024-11-26 17:39:21.102174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.855 [2024-11-26 17:39:21.128596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.855 [2024-11-26 17:39:21.128660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:43.855 [2024-11-26 
17:39:21.128674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.443 ms 00:42:43.855 [2024-11-26 17:39:21.128700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.855 [2024-11-26 17:39:21.143792] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:42:43.855 [2024-11-26 17:39:21.172268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.855 [2024-11-26 17:39:21.172349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:43.855 [2024-11-26 17:39:21.172388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.478 ms 00:42:43.855 [2024-11-26 17:39:21.172396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.855 [2024-11-26 17:39:21.269919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.855 [2024-11-26 17:39:21.269999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:42:43.855 [2024-11-26 17:39:21.270025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.632 ms 00:42:43.855 [2024-11-26 17:39:21.270033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:43.855 [2024-11-26 17:39:21.270313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:43.855 [2024-11-26 17:39:21.270327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:43.855 [2024-11-26 17:39:21.270342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:42:43.855 [2024-11-26 17:39:21.270350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.113 [2024-11-26 17:39:21.308465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.113 [2024-11-26 17:39:21.308522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:42:44.113 [2024-11-26 17:39:21.308539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.117 ms 00:42:44.113 [2024-11-26 17:39:21.308564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.113 [2024-11-26 17:39:21.344936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.113 [2024-11-26 17:39:21.345002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:42:44.113 [2024-11-26 17:39:21.345020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.373 ms 00:42:44.113 [2024-11-26 17:39:21.345027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.113 [2024-11-26 17:39:21.345841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.113 [2024-11-26 17:39:21.345864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:44.113 [2024-11-26 17:39:21.345877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:42:44.113 [2024-11-26 17:39:21.345885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.113 [2024-11-26 17:39:21.457523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.113 [2024-11-26 17:39:21.457591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:42:44.113 [2024-11-26 17:39:21.457628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.776 ms 00:42:44.113 [2024-11-26 17:39:21.457637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.113 [2024-11-26 
17:39:21.497687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.113 [2024-11-26 17:39:21.497748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:42:44.113 [2024-11-26 17:39:21.497768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.027 ms 00:42:44.113 [2024-11-26 17:39:21.497777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.114 [2024-11-26 17:39:21.534593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.114 [2024-11-26 17:39:21.534730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:42:44.114 [2024-11-26 17:39:21.534751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.844 ms 00:42:44.114 [2024-11-26 17:39:21.534759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.372 [2024-11-26 17:39:21.572856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.372 [2024-11-26 17:39:21.572916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:44.372 [2024-11-26 17:39:21.572934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.127 ms 00:42:44.372 [2024-11-26 17:39:21.572943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.372 [2024-11-26 17:39:21.573007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.372 [2024-11-26 17:39:21.573018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:44.372 [2024-11-26 17:39:21.573037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:44.372 [2024-11-26 17:39:21.573045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.372 [2024-11-26 17:39:21.573263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:44.372 [2024-11-26 17:39:21.573277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:44.372 [2024-11-26 17:39:21.573296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:42:44.372 [2024-11-26 17:39:21.573309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:44.372 [2024-11-26 17:39:21.575247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4626.549 ms, result 0 00:42:44.372 { 00:42:44.372 "name": "ftl0", 00:42:44.372 "uuid": "39ca1966-46a0-4862-a57c-60297cde8375" 00:42:44.372 } 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:44.372 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:44.631 17:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:42:44.631 [ 00:42:44.631 { 00:42:44.631 "name": "ftl0", 00:42:44.631 "aliases": [ 00:42:44.631 "39ca1966-46a0-4862-a57c-60297cde8375" 00:42:44.631 ], 00:42:44.631 "product_name": "FTL 
disk", 00:42:44.631 "block_size": 4096, 00:42:44.631 "num_blocks": 20971520, 00:42:44.631 "uuid": "39ca1966-46a0-4862-a57c-60297cde8375", 00:42:44.631 "assigned_rate_limits": { 00:42:44.631 "rw_ios_per_sec": 0, 00:42:44.631 "rw_mbytes_per_sec": 0, 00:42:44.631 "r_mbytes_per_sec": 0, 00:42:44.631 "w_mbytes_per_sec": 0 00:42:44.631 }, 00:42:44.631 "claimed": false, 00:42:44.631 "zoned": false, 00:42:44.631 "supported_io_types": { 00:42:44.631 "read": true, 00:42:44.631 "write": true, 00:42:44.631 "unmap": true, 00:42:44.631 "flush": true, 00:42:44.631 "reset": false, 00:42:44.631 "nvme_admin": false, 00:42:44.631 "nvme_io": false, 00:42:44.631 "nvme_io_md": false, 00:42:44.631 "write_zeroes": true, 00:42:44.631 "zcopy": false, 00:42:44.631 "get_zone_info": false, 00:42:44.631 "zone_management": false, 00:42:44.631 "zone_append": false, 00:42:44.631 "compare": false, 00:42:44.631 "compare_and_write": false, 00:42:44.631 "abort": false, 00:42:44.631 "seek_hole": false, 00:42:44.631 "seek_data": false, 00:42:44.631 "copy": false, 00:42:44.631 "nvme_iov_md": false 00:42:44.631 }, 00:42:44.631 "driver_specific": { 00:42:44.631 "ftl": { 00:42:44.631 "base_bdev": "31a71a5f-57bf-424f-984c-c3ea80121b25", 00:42:44.631 "cache": "nvc0n1p0" 00:42:44.631 } 00:42:44.631 } 00:42:44.631 } 00:42:44.631 ] 00:42:44.631 17:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:42:44.631 17:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:42:44.631 17:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:42:44.889 17:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:42:44.890 17:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:42:45.148 [2024-11-26 17:39:22.421536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.421719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:45.148 [2024-11-26 17:39:22.421742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:45.148 [2024-11-26 17:39:22.421758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.421797] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:45.148 [2024-11-26 17:39:22.426693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.426732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:45.148 [2024-11-26 17:39:22.426750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:42:45.148 [2024-11-26 17:39:22.426759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.427271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.427294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:45.148 [2024-11-26 17:39:22.427308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:42:45.148 [2024-11-26 17:39:22.427316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.430120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.430147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:45.148 
[2024-11-26 17:39:22.430160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:42:45.148 [2024-11-26 17:39:22.430169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.435314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.435348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:45.148 [2024-11-26 17:39:22.435361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.120 ms 00:42:45.148 [2024-11-26 17:39:22.435368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.475284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.475335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:45.148 [2024-11-26 17:39:22.475644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.911 ms 00:42:45.148 [2024-11-26 17:39:22.475654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.498446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.498498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:45.148 [2024-11-26 17:39:22.498518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.766 ms 00:42:45.148 [2024-11-26 17:39:22.498527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.148 [2024-11-26 17:39:22.498772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.148 [2024-11-26 17:39:22.498786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:45.148 [2024-11-26 17:39:22.498799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:42:45.149 [2024-11-26 17:39:22.498807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.149 [2024-11-26 17:39:22.535611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.149 [2024-11-26 17:39:22.535656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:45.149 [2024-11-26 17:39:22.535672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.841 ms 00:42:45.149 [2024-11-26 17:39:22.535679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.149 [2024-11-26 17:39:22.571976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.149 [2024-11-26 17:39:22.572105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:45.149 [2024-11-26 17:39:22.572125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.310 ms 00:42:45.149 [2024-11-26 17:39:22.572133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.408 [2024-11-26 17:39:22.610352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.408 [2024-11-26 17:39:22.610495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:45.408 [2024-11-26 17:39:22.610518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.230 ms 00:42:45.408 [2024-11-26 17:39:22.610528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.408 [2024-11-26 17:39:22.648280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.408 [2024-11-26 17:39:22.648327] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:45.408 [2024-11-26 17:39:22.648343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.627 ms 00:42:45.408 [2024-11-26 17:39:22.648351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.408 [2024-11-26 17:39:22.648403] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:45.408 [2024-11-26 17:39:22.648420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 
[2024-11-26 17:39:22.648643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:45.408 [2024-11-26 17:39:22.648694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:42:45.409 [2024-11-26 17:39:22.648912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.648992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:45.409 [2024-11-26 17:39:22.649470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:45.409 [2024-11-26 17:39:22.649481] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39ca1966-46a0-4862-a57c-60297cde8375 00:42:45.409 [2024-11-26 17:39:22.649489] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:45.409 [2024-11-26 17:39:22.649502] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:45.409 [2024-11-26 17:39:22.649513] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:45.409 [2024-11-26 17:39:22.649524] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:45.409 [2024-11-26 17:39:22.649532] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:45.409 [2024-11-26 17:39:22.649542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:45.409 [2024-11-26 17:39:22.649550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:45.409 [2024-11-26 17:39:22.649560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:45.409 [2024-11-26 17:39:22.649566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:45.409 [2024-11-26 17:39:22.649576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.409 [2024-11-26 17:39:22.649585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:45.409 [2024-11-26 17:39:22.649596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:42:45.409 [2024-11-26 17:39:22.649604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.409 [2024-11-26 17:39:22.671202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.409 [2024-11-26 17:39:22.671243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:45.409 [2024-11-26 17:39:22.671259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.567 ms 00:42:45.409 [2024-11-26 17:39:22.671267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.409 [2024-11-26 17:39:22.671933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:45.409 [2024-11-26 17:39:22.671944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:45.409 [2024-11-26 17:39:22.671956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:42:45.409 [2024-11-26 17:39:22.671964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.409 [2024-11-26 17:39:22.745681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.409 [2024-11-26 17:39:22.745743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:45.409 [2024-11-26 17:39:22.745759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.409 [2024-11-26 17:39:22.745768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
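
The unload driving the shutdown records in this stretch follows the same RPC pattern as the setup. A sketch, reusing `$rpc` from the setup sketch above and an illustrative path for the saved config:

    # Save the bdev stack as a JSON subsystem config; the fio jobs later
    # recreate exactly this stack from the file.
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > /tmp/ftl.json        # output path is illustrative

    # Unloading triggers the 'FTL shutdown' management pipeline: persist
    # the L2P, NV cache, valid map, P2L, band and trim metadata, mark the
    # superblock clean, then roll back each init step in reverse order.
    $rpc bdev_ftl_unload -b ftl0

The Rollback records that follow undo the startup actions one by one, ending in the `Management process finished, name 'FTL shutdown'` line below, after which the test kills the SPDK app (pid 77644 in this run).
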
00:42:45.409 [2024-11-26 17:39:22.745854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.409 [2024-11-26 17:39:22.745863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:45.409 [2024-11-26 17:39:22.745874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.409 [2024-11-26 17:39:22.745882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.409 [2024-11-26 17:39:22.746025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.409 [2024-11-26 17:39:22.746042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:45.409 [2024-11-26 17:39:22.746053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.409 [2024-11-26 17:39:22.746061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.409 [2024-11-26 17:39:22.746105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.409 [2024-11-26 17:39:22.746114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:45.409 [2024-11-26 17:39:22.746125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.409 [2024-11-26 17:39:22.746135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:22.896172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:22.896246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:45.668 [2024-11-26 17:39:22.896265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:22.896274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.008029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.008193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:45.668 [2024-11-26 17:39:23.008215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:23.008224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.008385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.008396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:45.668 [2024-11-26 17:39:23.008411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:23.008420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.008503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.008513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:45.668 [2024-11-26 17:39:23.008524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:23.008532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.008695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.008709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:45.668 [2024-11-26 17:39:23.008724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 
17:39:23.008733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.008803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.008815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:45.668 [2024-11-26 17:39:23.008826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:23.008833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.008894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.008904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:45.668 [2024-11-26 17:39:23.008914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:23.008933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.009001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:45.668 [2024-11-26 17:39:23.009011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:45.668 [2024-11-26 17:39:23.009022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:45.668 [2024-11-26 17:39:23.009029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:45.668 [2024-11-26 17:39:23.009252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 588.792 ms, result 0 00:42:45.668 true 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77644 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77644 ']' 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77644 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77644 00:42:45.668 killing process with pid 77644 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77644' 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77644 00:42:45.668 17:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77644 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:53.783 17:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:42:53.783 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:42:53.783 fio-3.35 00:42:53.783 Starting 1 thread 00:42:59.079 00:42:59.079 test: (groupid=0, jobs=1): err= 0: pid=77877: Tue Nov 26 17:39:35 2024 00:42:59.079 read: IOPS=1008, BW=67.0MiB/s (70.2MB/s)(255MiB/3800msec) 00:42:59.079 slat (nsec): min=4385, max=33395, avg=6665.02, stdev=2828.28 00:42:59.079 clat (usec): min=273, max=795, avg=445.64, stdev=55.25 00:42:59.079 lat (usec): min=280, max=802, avg=452.31, stdev=55.51 00:42:59.079 clat percentiles (usec): 00:42:59.079 | 1.00th=[ 318], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 392], 00:42:59.079 | 30.00th=[ 408], 40.00th=[ 445], 50.00th=[ 449], 60.00th=[ 457], 00:42:59.079 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 519], 95.00th=[ 529], 00:42:59.079 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 766], 99.95th=[ 783], 00:42:59.079 | 99.99th=[ 799] 00:42:59.079 write: IOPS=1016, BW=67.5MiB/s (70.8MB/s)(256MiB/3795msec); 0 zone resets 00:42:59.079 slat (nsec): min=15701, max=91510, avg=21372.46, stdev=5450.96 00:42:59.079 clat (usec): min=340, max=990, avg=501.99, stdev=70.19 00:42:59.079 lat (usec): min=359, max=1012, avg=523.37, stdev=70.52 00:42:59.079 clat percentiles (usec): 00:42:59.079 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 461], 00:42:59.079 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 515], 00:42:59.079 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 603], 00:42:59.079 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 955], 00:42:59.079 | 99.99th=[ 988] 00:42:59.079 bw ( KiB/s): min=65960, max=70312, per=99.96%, avg=69068.57, stdev=1561.39, samples=7 00:42:59.079 iops : min= 970, max= 1034, avg=1015.71, stdev=22.96, samples=7 00:42:59.079 lat (usec) : 500=69.32%, 750=29.90%, 1000=0.78% 00:42:59.079 cpu 
: usr=99.26%, sys=0.11%, ctx=8, majf=0, minf=1169 00:42:59.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:59.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.079 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:59.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:59.079 00:42:59.079 Run status group 0 (all jobs): 00:42:59.079 READ: bw=67.0MiB/s (70.2MB/s), 67.0MiB/s-67.0MiB/s (70.2MB/s-70.2MB/s), io=255MiB (267MB), run=3800-3800msec 00:42:59.079 WRITE: bw=67.5MiB/s (70.8MB/s), 67.5MiB/s-67.5MiB/s (70.8MB/s-70.8MB/s), io=256MiB (269MB), run=3795-3795msec 00:43:00.131 ----------------------------------------------------- 00:43:00.131 Suppressions used: 00:43:00.131 count bytes template 00:43:00.131 1 5 /usr/src/fio/parse.c 00:43:00.131 1 8 libtcmalloc_minimal.so 00:43:00.131 1 904 libcrypto.so 00:43:00.131 ----------------------------------------------------- 00:43:00.131 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:00.390 17:39:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:43:00.649 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:00.649 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:00.649 fio-3.35 00:43:00.649 Starting 2 threads 00:43:32.753 00:43:32.753 first_half: (groupid=0, jobs=1): err= 0: pid=77986: Tue Nov 26 17:40:05 2024 00:43:32.753 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(255MiB/25506msec) 00:43:32.753 slat (nsec): min=3955, max=37159, avg=6900.92, stdev=1818.60 00:43:32.753 clat (usec): min=1117, max=315155, avg=38805.45, stdev=20292.00 00:43:32.753 lat (usec): min=1127, max=315159, avg=38812.35, stdev=20292.27 00:43:32.753 clat percentiles (msec): 00:43:32.753 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:32.753 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:43:32.753 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 62], 00:43:32.753 | 99.00th=[ 153], 99.50th=[ 178], 99.90th=[ 211], 99.95th=[ 264], 00:43:32.753 | 99.99th=[ 305] 00:43:32.753 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(256MiB/21398msec); 0 zone resets 00:43:32.753 slat (usec): min=4, max=687, avg= 9.13, stdev= 7.06 00:43:32.753 clat (usec): min=407, max=95890, avg=11115.43, stdev=18022.82 00:43:32.753 lat (usec): min=416, max=95900, avg=11124.56, stdev=18022.95 00:43:32.753 clat percentiles (usec): 00:43:32.753 | 1.00th=[ 1123], 5.00th=[ 1516], 10.00th=[ 1778], 20.00th=[ 2180], 00:43:32.753 | 30.00th=[ 3392], 40.00th=[ 5342], 50.00th=[ 6521], 60.00th=[ 7504], 00:43:32.753 | 70.00th=[ 8717], 80.00th=[12256], 90.00th=[15926], 95.00th=[52691], 00:43:32.753 | 99.00th=[90702], 99.50th=[91751], 99.90th=[93848], 99.95th=[94897], 00:43:32.753 | 99.99th=[94897] 00:43:32.753 bw ( KiB/s): min= 1256, max=39672, per=100.00%, avg=22795.13, stdev=12045.76, samples=23 00:43:32.753 iops : min= 314, max= 9918, avg=5698.78, stdev=3011.44, samples=23 00:43:32.753 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.21% 00:43:32.753 lat (msec) : 2=7.85%, 4=8.72%, 10=21.24%, 20=8.70%, 50=46.27% 00:43:32.753 lat (msec) : 100=5.72%, 250=1.21%, 500=0.03% 00:43:32.753 cpu : usr=99.20%, sys=0.21%, ctx=77, majf=0, minf=5573 00:43:32.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:43:32.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.753 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:32.753 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:32.753 second_half: (groupid=0, jobs=1): err= 0: pid=77987: Tue Nov 26 17:40:05 2024 00:43:32.753 read: IOPS=2541, BW=9.93MiB/s (10.4MB/s)(255MiB/25697msec) 00:43:32.753 slat (nsec): min=3968, max=29292, avg=6756.52, stdev=1703.77 00:43:32.753 clat (usec): min=1208, max=319328, avg=38163.71, stdev=24119.27 00:43:32.753 lat (usec): min=1216, max=319335, avg=38170.47, stdev=24119.59 00:43:32.754 clat percentiles (msec): 00:43:32.754 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:43:32.754 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:43:32.754 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 58], 00:43:32.754 | 99.00th=[ 182], 99.50th=[ 194], 
99.90th=[ 220], 99.95th=[ 247], 00:43:32.754 | 99.99th=[ 313] 00:43:32.754 write: IOPS=2796, BW=10.9MiB/s (11.5MB/s)(256MiB/23439msec); 0 zone resets 00:43:32.754 slat (usec): min=4, max=667, avg= 9.07, stdev= 5.36 00:43:32.754 clat (usec): min=437, max=96760, avg=12131.83, stdev=19571.70 00:43:32.754 lat (usec): min=453, max=96767, avg=12140.90, stdev=19571.88 00:43:32.754 clat percentiles (usec): 00:43:32.754 | 1.00th=[ 1106], 5.00th=[ 1418], 10.00th=[ 1647], 20.00th=[ 1942], 00:43:32.754 | 30.00th=[ 2278], 40.00th=[ 3425], 50.00th=[ 5276], 60.00th=[ 7046], 00:43:32.754 | 70.00th=[ 8848], 80.00th=[13829], 90.00th=[35914], 95.00th=[54789], 00:43:32.754 | 99.00th=[91751], 99.50th=[92799], 99.90th=[94897], 99.95th=[95945], 00:43:32.754 | 99.99th=[95945] 00:43:32.754 bw ( KiB/s): min= 208, max=55680, per=86.82%, avg=19420.44, stdev=14373.29, samples=27 00:43:32.754 iops : min= 52, max=13920, avg=4855.11, stdev=3593.32, samples=27 00:43:32.754 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.18% 00:43:32.754 lat (msec) : 2=10.93%, 4=10.63%, 10=16.29%, 20=7.70%, 50=48.72% 00:43:32.754 lat (msec) : 100=3.93%, 250=1.55%, 500=0.02% 00:43:32.754 cpu : usr=99.30%, sys=0.15%, ctx=46, majf=0, minf=5538 00:43:32.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:43:32.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.754 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:32.754 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:32.754 00:43:32.754 Run status group 0 (all jobs): 00:43:32.754 READ: bw=19.9MiB/s (20.8MB/s), 9.93MiB/s-10.0MiB/s (10.4MB/s-10.5MB/s), io=510MiB (535MB), run=25506-25697msec 00:43:32.754 WRITE: bw=21.8MiB/s (22.9MB/s), 10.9MiB/s-12.0MiB/s (11.5MB/s-12.5MB/s), io=512MiB (537MB), run=21398-23439msec 00:43:32.754 ----------------------------------------------------- 00:43:32.754 Suppressions used: 00:43:32.754 count bytes template 00:43:32.754 2 10 /usr/src/fio/parse.c 00:43:32.754 4 384 /usr/src/fio/iolog.c 00:43:32.754 1 8 libtcmalloc_minimal.so 00:43:32.754 1 904 libcrypto.so 00:43:32.754 ----------------------------------------------------- 00:43:32.754 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:32.754 17:40:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:32.754 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:32.754 fio-3.35 00:43:32.754 Starting 1 thread 00:43:47.634 00:43:47.634 test: (groupid=0, jobs=1): err= 0: pid=78321: Tue Nov 26 17:40:23 2024 00:43:47.634 read: IOPS=7571, BW=29.6MiB/s (31.0MB/s)(255MiB/8612msec) 00:43:47.634 slat (nsec): min=3795, max=30562, avg=6014.30, stdev=1439.96 00:43:47.634 clat (usec): min=745, max=33118, avg=16896.65, stdev=889.23 00:43:47.634 lat (usec): min=750, max=33124, avg=16902.67, stdev=889.20 00:43:47.634 clat percentiles (usec): 00:43:47.634 | 1.00th=[16057], 5.00th=[16319], 10.00th=[16319], 20.00th=[16581], 00:43:47.634 | 30.00th=[16581], 40.00th=[16712], 50.00th=[16909], 60.00th=[16909], 00:43:47.634 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17433], 00:43:47.634 | 99.00th=[19530], 99.50th=[19792], 99.90th=[28443], 99.95th=[29230], 00:43:47.634 | 99.99th=[32375] 00:43:47.634 write: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(256MiB/5390msec); 0 zone resets 00:43:47.634 slat (usec): min=4, max=700, avg= 9.22, stdev= 7.98 00:43:47.634 clat (usec): min=643, max=58078, avg=10475.05, stdev=12715.27 00:43:47.634 lat (usec): min=651, max=58086, avg=10484.27, stdev=12715.27 00:43:47.634 clat percentiles (usec): 00:43:47.634 | 1.00th=[ 1106], 5.00th=[ 1336], 10.00th=[ 1500], 20.00th=[ 1696], 00:43:47.634 | 30.00th=[ 1876], 40.00th=[ 2212], 50.00th=[ 6915], 60.00th=[ 8160], 00:43:47.634 | 70.00th=[ 9241], 80.00th=[11207], 90.00th=[38011], 95.00th=[39584], 00:43:47.634 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[46924], 00:43:47.634 | 99.99th=[54264] 00:43:47.634 bw ( KiB/s): min=33000, max=62888, per=97.98%, avg=47653.09, stdev=9511.41, samples=11 00:43:47.634 iops : min= 8250, max=15722, avg=11913.27, stdev=2377.85, samples=11 00:43:47.634 lat (usec) : 750=0.01%, 1000=0.16% 00:43:47.634 lat (msec) : 2=17.53%, 4=3.32%, 10=16.63%, 20=54.12%, 50=8.23% 00:43:47.634 lat (msec) : 100=0.02% 00:43:47.634 cpu : usr=99.06%, sys=0.25%, ctx=21, majf=0, minf=5565 00:43:47.634 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:43:47.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:47.634 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:47.634 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:47.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:47.634 00:43:47.634 Run status group 0 (all jobs): 00:43:47.634 READ: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=255MiB (267MB), run=8612-8612msec 00:43:47.634 WRITE: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=256MiB (268MB), run=5390-5390msec 00:43:48.571 ----------------------------------------------------- 00:43:48.571 Suppressions used: 00:43:48.571 count bytes template 00:43:48.571 1 5 /usr/src/fio/parse.c 00:43:48.571 2 192 /usr/src/fio/iolog.c 00:43:48.571 1 8 libtcmalloc_minimal.so 00:43:48.571 1 904 libcrypto.so 00:43:48.571 ----------------------------------------------------- 00:43:48.571 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:43:48.571 Remove shared memory files 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58032 /dev/shm/spdk_tgt_trace.pid76535 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:43:48.571 ************************************ 00:43:48.571 END TEST ftl_fio_basic 00:43:48.571 ************************************ 00:43:48.571 00:43:48.571 real 1m13.670s 00:43:48.571 user 2m40.938s 00:43:48.571 sys 0m4.342s 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:48.571 17:40:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:48.831 17:40:26 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:43:48.831 17:40:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:48.831 17:40:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:48.831 17:40:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:43:48.831 ************************************ 00:43:48.831 START TEST ftl_bdevperf 00:43:48.831 ************************************ 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:43:48.831 * Looking for test storage... 
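The remove_shm step above clears the per-PID trace files an spdk_tgt leaves behind in /dev/shm (pid58032 and pid76535 name targets started earlier in this job). A generic cleanup along the same lines might look like the sketch below; the glob is an assumption, since the suite itself, as seen above, tracks the exact PIDs instead:

    for f in /dev/shm/spdk_tgt_trace.pid* /dev/shm/iscsi; do
        [ -e "$f" ] && rm -f "$f"    # skip glob literals that matched nothing
    done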
00:43:48.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:48.831 17:40:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:48.832 --rc genhtml_branch_coverage=1 00:43:48.832 --rc genhtml_function_coverage=1 00:43:48.832 --rc genhtml_legend=1 00:43:48.832 --rc geninfo_all_blocks=1 00:43:48.832 --rc geninfo_unexecuted_blocks=1 00:43:48.832 00:43:48.832 ' 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:48.832 --rc genhtml_branch_coverage=1 00:43:48.832 
--rc genhtml_function_coverage=1 00:43:48.832 --rc genhtml_legend=1 00:43:48.832 --rc geninfo_all_blocks=1 00:43:48.832 --rc geninfo_unexecuted_blocks=1 00:43:48.832 00:43:48.832 ' 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:48.832 --rc genhtml_branch_coverage=1 00:43:48.832 --rc genhtml_function_coverage=1 00:43:48.832 --rc genhtml_legend=1 00:43:48.832 --rc geninfo_all_blocks=1 00:43:48.832 --rc geninfo_unexecuted_blocks=1 00:43:48.832 00:43:48.832 ' 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:48.832 --rc genhtml_branch_coverage=1 00:43:48.832 --rc genhtml_function_coverage=1 00:43:48.832 --rc genhtml_legend=1 00:43:48.832 --rc geninfo_all_blocks=1 00:43:48.832 --rc geninfo_unexecuted_blocks=1 00:43:48.832 00:43:48.832 ' 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:43:48.832 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78561 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78561 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78561 ']' 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:49.092 17:40:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:49.092 [2024-11-26 17:40:26.399024] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:43:49.092 [2024-11-26 17:40:26.399261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78561 ] 00:43:49.351 [2024-11-26 17:40:26.583178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:49.351 [2024-11-26 17:40:26.725029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:43:49.919 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:43:50.179 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:50.438 { 00:43:50.438 "name": "nvme0n1", 00:43:50.438 "aliases": [ 00:43:50.438 "bf306442-b95f-4bb0-b449-16d68ce303f1" 00:43:50.438 ], 00:43:50.438 "product_name": "NVMe disk", 00:43:50.438 "block_size": 4096, 00:43:50.438 "num_blocks": 1310720, 00:43:50.438 "uuid": "bf306442-b95f-4bb0-b449-16d68ce303f1", 00:43:50.438 "numa_id": -1, 00:43:50.438 "assigned_rate_limits": { 00:43:50.438 "rw_ios_per_sec": 0, 00:43:50.438 "rw_mbytes_per_sec": 0, 00:43:50.438 "r_mbytes_per_sec": 0, 00:43:50.438 "w_mbytes_per_sec": 0 00:43:50.438 }, 00:43:50.438 "claimed": true, 00:43:50.438 "claim_type": "read_many_write_one", 00:43:50.438 "zoned": false, 00:43:50.438 "supported_io_types": { 00:43:50.438 "read": true, 00:43:50.438 "write": true, 00:43:50.438 "unmap": true, 00:43:50.438 "flush": true, 00:43:50.438 "reset": true, 00:43:50.438 "nvme_admin": true, 00:43:50.438 "nvme_io": true, 00:43:50.438 "nvme_io_md": false, 00:43:50.438 "write_zeroes": true, 00:43:50.438 "zcopy": false, 00:43:50.438 "get_zone_info": false, 00:43:50.438 "zone_management": false, 00:43:50.438 "zone_append": false, 00:43:50.438 "compare": true, 00:43:50.438 "compare_and_write": false, 00:43:50.438 "abort": true, 00:43:50.438 "seek_hole": false, 00:43:50.438 "seek_data": false, 00:43:50.438 "copy": true, 00:43:50.438 "nvme_iov_md": false 00:43:50.438 }, 00:43:50.438 "driver_specific": { 00:43:50.438 
"nvme": [ 00:43:50.438 { 00:43:50.438 "pci_address": "0000:00:11.0", 00:43:50.438 "trid": { 00:43:50.438 "trtype": "PCIe", 00:43:50.438 "traddr": "0000:00:11.0" 00:43:50.438 }, 00:43:50.438 "ctrlr_data": { 00:43:50.438 "cntlid": 0, 00:43:50.438 "vendor_id": "0x1b36", 00:43:50.438 "model_number": "QEMU NVMe Ctrl", 00:43:50.438 "serial_number": "12341", 00:43:50.438 "firmware_revision": "8.0.0", 00:43:50.438 "subnqn": "nqn.2019-08.org.qemu:12341", 00:43:50.438 "oacs": { 00:43:50.438 "security": 0, 00:43:50.438 "format": 1, 00:43:50.438 "firmware": 0, 00:43:50.438 "ns_manage": 1 00:43:50.438 }, 00:43:50.438 "multi_ctrlr": false, 00:43:50.438 "ana_reporting": false 00:43:50.438 }, 00:43:50.438 "vs": { 00:43:50.438 "nvme_version": "1.4" 00:43:50.438 }, 00:43:50.438 "ns_data": { 00:43:50.438 "id": 1, 00:43:50.438 "can_share": false 00:43:50.438 } 00:43:50.438 } 00:43:50.438 ], 00:43:50.438 "mp_policy": "active_passive" 00:43:50.438 } 00:43:50.438 } 00:43:50.438 ]' 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:43:50.438 17:40:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:50.697 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=66f66f8b-db01-4eaa-b4ed-345165ab63d9 00:43:50.697 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:43:50.697 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66f66f8b-db01-4eaa-b4ed-345165ab63d9 00:43:50.956 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:43:51.215 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ea44e3d4-880d-4df5-b0b5-7658c4be4b52 00:43:51.215 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ea44e3d4-880d-4df5-b0b5-7658c4be4b52 00:43:51.472 17:40:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:51.472 17:40:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:51.472 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:43:51.472 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:43:51.472 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:51.473 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:43:51.473 17:40:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:51.473 17:40:28 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:51.473 17:40:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:51.473 17:40:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:43:51.473 17:40:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:43:51.473 17:40:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:51.731 17:40:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:51.731 { 00:43:51.731 "name": "3a5d4f84-30c3-4126-aa36-d58266875af7", 00:43:51.731 "aliases": [ 00:43:51.731 "lvs/nvme0n1p0" 00:43:51.731 ], 00:43:51.731 "product_name": "Logical Volume", 00:43:51.731 "block_size": 4096, 00:43:51.731 "num_blocks": 26476544, 00:43:51.731 "uuid": "3a5d4f84-30c3-4126-aa36-d58266875af7", 00:43:51.731 "assigned_rate_limits": { 00:43:51.731 "rw_ios_per_sec": 0, 00:43:51.731 "rw_mbytes_per_sec": 0, 00:43:51.731 "r_mbytes_per_sec": 0, 00:43:51.731 "w_mbytes_per_sec": 0 00:43:51.731 }, 00:43:51.731 "claimed": false, 00:43:51.731 "zoned": false, 00:43:51.731 "supported_io_types": { 00:43:51.731 "read": true, 00:43:51.731 "write": true, 00:43:51.731 "unmap": true, 00:43:51.731 "flush": false, 00:43:51.731 "reset": true, 00:43:51.731 "nvme_admin": false, 00:43:51.731 "nvme_io": false, 00:43:51.731 "nvme_io_md": false, 00:43:51.731 "write_zeroes": true, 00:43:51.731 "zcopy": false, 00:43:51.731 "get_zone_info": false, 00:43:51.731 "zone_management": false, 00:43:51.731 "zone_append": false, 00:43:51.731 "compare": false, 00:43:51.731 "compare_and_write": false, 00:43:51.731 "abort": false, 00:43:51.731 "seek_hole": true, 00:43:51.731 "seek_data": true, 00:43:51.731 "copy": false, 00:43:51.731 "nvme_iov_md": false 00:43:51.731 }, 00:43:51.731 "driver_specific": { 00:43:51.731 "lvol": { 00:43:51.731 "lvol_store_uuid": "ea44e3d4-880d-4df5-b0b5-7658c4be4b52", 00:43:51.731 "base_bdev": "nvme0n1", 00:43:51.731 "thin_provision": true, 00:43:51.731 "num_allocated_clusters": 0, 00:43:51.731 "snapshot": false, 00:43:51.731 "clone": false, 00:43:51.731 "esnap_clone": false 00:43:51.731 } 00:43:51.731 } 00:43:51.731 } 00:43:51.731 ]' 00:43:51.731 17:40:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:43:51.731 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:43:52.020 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:52.278 { 00:43:52.278 "name": "3a5d4f84-30c3-4126-aa36-d58266875af7", 00:43:52.278 "aliases": [ 00:43:52.278 "lvs/nvme0n1p0" 00:43:52.278 ], 00:43:52.278 "product_name": "Logical Volume", 00:43:52.278 "block_size": 4096, 00:43:52.278 "num_blocks": 26476544, 00:43:52.278 "uuid": "3a5d4f84-30c3-4126-aa36-d58266875af7", 00:43:52.278 "assigned_rate_limits": { 00:43:52.278 "rw_ios_per_sec": 0, 00:43:52.278 "rw_mbytes_per_sec": 0, 00:43:52.278 "r_mbytes_per_sec": 0, 00:43:52.278 "w_mbytes_per_sec": 0 00:43:52.278 }, 00:43:52.278 "claimed": false, 00:43:52.278 "zoned": false, 00:43:52.278 "supported_io_types": { 00:43:52.278 "read": true, 00:43:52.278 "write": true, 00:43:52.278 "unmap": true, 00:43:52.278 "flush": false, 00:43:52.278 "reset": true, 00:43:52.278 "nvme_admin": false, 00:43:52.278 "nvme_io": false, 00:43:52.278 "nvme_io_md": false, 00:43:52.278 "write_zeroes": true, 00:43:52.278 "zcopy": false, 00:43:52.278 "get_zone_info": false, 00:43:52.278 "zone_management": false, 00:43:52.278 "zone_append": false, 00:43:52.278 "compare": false, 00:43:52.278 "compare_and_write": false, 00:43:52.278 "abort": false, 00:43:52.278 "seek_hole": true, 00:43:52.278 "seek_data": true, 00:43:52.278 "copy": false, 00:43:52.278 "nvme_iov_md": false 00:43:52.278 }, 00:43:52.278 "driver_specific": { 00:43:52.278 "lvol": { 00:43:52.278 "lvol_store_uuid": "ea44e3d4-880d-4df5-b0b5-7658c4be4b52", 00:43:52.278 "base_bdev": "nvme0n1", 00:43:52.278 "thin_provision": true, 00:43:52.278 "num_allocated_clusters": 0, 00:43:52.278 "snapshot": false, 00:43:52.278 "clone": false, 00:43:52.278 "esnap_clone": false 00:43:52.278 } 00:43:52.278 } 00:43:52.278 } 00:43:52.278 ]' 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:43:52.278 17:40:29 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:43:52.537 17:40:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:43:52.537 17:40:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:52.537 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:52.537 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:52.538 17:40:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:43:52.538 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:43:52.538 17:40:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a5d4f84-30c3-4126-aa36-d58266875af7 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:52.797 { 00:43:52.797 "name": "3a5d4f84-30c3-4126-aa36-d58266875af7", 00:43:52.797 "aliases": [ 00:43:52.797 "lvs/nvme0n1p0" 00:43:52.797 ], 00:43:52.797 "product_name": "Logical Volume", 00:43:52.797 "block_size": 4096, 00:43:52.797 "num_blocks": 26476544, 00:43:52.797 "uuid": "3a5d4f84-30c3-4126-aa36-d58266875af7", 00:43:52.797 "assigned_rate_limits": { 00:43:52.797 "rw_ios_per_sec": 0, 00:43:52.797 "rw_mbytes_per_sec": 0, 00:43:52.797 "r_mbytes_per_sec": 0, 00:43:52.797 "w_mbytes_per_sec": 0 00:43:52.797 }, 00:43:52.797 "claimed": false, 00:43:52.797 "zoned": false, 00:43:52.797 "supported_io_types": { 00:43:52.797 "read": true, 00:43:52.797 "write": true, 00:43:52.797 "unmap": true, 00:43:52.797 "flush": false, 00:43:52.797 "reset": true, 00:43:52.797 "nvme_admin": false, 00:43:52.797 "nvme_io": false, 00:43:52.797 "nvme_io_md": false, 00:43:52.797 "write_zeroes": true, 00:43:52.797 "zcopy": false, 00:43:52.797 "get_zone_info": false, 00:43:52.797 "zone_management": false, 00:43:52.797 "zone_append": false, 00:43:52.797 "compare": false, 00:43:52.797 "compare_and_write": false, 00:43:52.797 "abort": false, 00:43:52.797 "seek_hole": true, 00:43:52.797 "seek_data": true, 00:43:52.797 "copy": false, 00:43:52.797 "nvme_iov_md": false 00:43:52.797 }, 00:43:52.797 "driver_specific": { 00:43:52.797 "lvol": { 00:43:52.797 "lvol_store_uuid": "ea44e3d4-880d-4df5-b0b5-7658c4be4b52", 00:43:52.797 "base_bdev": "nvme0n1", 00:43:52.797 "thin_provision": true, 00:43:52.797 "num_allocated_clusters": 0, 00:43:52.797 "snapshot": false, 00:43:52.797 "clone": false, 00:43:52.797 "esnap_clone": false 00:43:52.797 } 00:43:52.797 } 00:43:52.797 } 00:43:52.797 ]' 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:43:52.797 17:40:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3a5d4f84-30c3-4126-aa36-d58266875af7 -c nvc0n1p0 --l2p_dram_limit 20 00:43:53.057 [2024-11-26 17:40:30.338290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.057 [2024-11-26 17:40:30.338363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:53.057 [2024-11-26 17:40:30.338379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:43:53.057 [2024-11-26 17:40:30.338390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.057 [2024-11-26 17:40:30.338469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.057 [2024-11-26 17:40:30.338481] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:53.057 [2024-11-26 17:40:30.338490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:43:53.057 [2024-11-26 17:40:30.338501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.057 [2024-11-26 17:40:30.338519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:53.057 [2024-11-26 17:40:30.339636] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:53.057 [2024-11-26 17:40:30.339663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.057 [2024-11-26 17:40:30.339675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:53.057 [2024-11-26 17:40:30.339684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:43:53.057 [2024-11-26 17:40:30.339696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.057 [2024-11-26 17:40:30.339773] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID baa272b7-394c-459c-af92-427aec444244 00:43:53.058 [2024-11-26 17:40:30.342264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.342296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:43:53.058 [2024-11-26 17:40:30.342315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:43:53.058 [2024-11-26 17:40:30.342323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.356583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.356644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:53.058 [2024-11-26 17:40:30.356660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.222 ms 00:43:53.058 [2024-11-26 17:40:30.356672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.356799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.356815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:53.058 [2024-11-26 17:40:30.356832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:43:53.058 [2024-11-26 17:40:30.356840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.356907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.356916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:53.058 [2024-11-26 17:40:30.356927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:43:53.058 [2024-11-26 17:40:30.356935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.356965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:53.058 [2024-11-26 17:40:30.363022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.363055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:53.058 [2024-11-26 17:40:30.363064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:43:53.058 [2024-11-26 17:40:30.363079] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.363111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.363122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:53.058 [2024-11-26 17:40:30.363130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:43:53.058 [2024-11-26 17:40:30.363140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.363168] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:43:53.058 [2024-11-26 17:40:30.363303] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:53.058 [2024-11-26 17:40:30.363315] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:53.058 [2024-11-26 17:40:30.363327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:53.058 [2024-11-26 17:40:30.363353] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363365] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:53.058 [2024-11-26 17:40:30.363383] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:53.058 [2024-11-26 17:40:30.363391] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:53.058 [2024-11-26 17:40:30.363401] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:53.058 [2024-11-26 17:40:30.363412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.363422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:53.058 [2024-11-26 17:40:30.363430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:43:53.058 [2024-11-26 17:40:30.363440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.363508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.058 [2024-11-26 17:40:30.363519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:53.058 [2024-11-26 17:40:30.363527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:43:53.058 [2024-11-26 17:40:30.363540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.058 [2024-11-26 17:40:30.363620] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:53.058 [2024-11-26 17:40:30.363647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:53.058 [2024-11-26 17:40:30.363655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:53.058 [2024-11-26 17:40:30.363685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:53.058 
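The layout figures above check out against each other: 20971520 L2P entries at the stated 4 bytes per address come to exactly the 80.00 MiB shown for the l2p region, while the --l2p_dram_limit 20 passed to bdev_ftl_create earlier caps the DRAM-resident share of that map at 20 MiB. Quick arithmetic check:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 (MiB), matching "Region l2p ... blocks: 80.00 MiB"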
[2024-11-26 17:40:30.363700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:53.058 [2024-11-26 17:40:30.363707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:53.058 [2024-11-26 17:40:30.363724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:53.058 [2024-11-26 17:40:30.363748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:43:53.058 [2024-11-26 17:40:30.363755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:53.058 [2024-11-26 17:40:30.363764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:53.058 [2024-11-26 17:40:30.363771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:53.058 [2024-11-26 17:40:30.363783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:53.058 [2024-11-26 17:40:30.363798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:53.058 [2024-11-26 17:40:30.363824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:53.058 [2024-11-26 17:40:30.363848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:53.058 [2024-11-26 17:40:30.363870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:53.058 [2024-11-26 17:40:30.363895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:53.058 [2024-11-26 17:40:30.363912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:53.058 [2024-11-26 17:40:30.363919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:53.058 [2024-11-26 17:40:30.363933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:53.058 [2024-11-26 17:40:30.363941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:53.058 [2024-11-26 17:40:30.363948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:53.058 [2024-11-26 17:40:30.363957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:53.058 [2024-11-26 17:40:30.363964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:43:53.058 [2024-11-26 17:40:30.363973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.363979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:53.058 [2024-11-26 17:40:30.363987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:53.058 [2024-11-26 17:40:30.363995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.364003] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:53.058 [2024-11-26 17:40:30.364011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:53.058 [2024-11-26 17:40:30.364021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:53.058 [2024-11-26 17:40:30.364028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:53.058 [2024-11-26 17:40:30.364042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:53.058 [2024-11-26 17:40:30.364049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:53.058 [2024-11-26 17:40:30.364058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:53.058 [2024-11-26 17:40:30.364065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:53.058 [2024-11-26 17:40:30.364074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:53.058 [2024-11-26 17:40:30.364081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:53.058 [2024-11-26 17:40:30.364095] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:53.058 [2024-11-26 17:40:30.364104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:53.058 [2024-11-26 17:40:30.364115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:53.058 [2024-11-26 17:40:30.364122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:53.058 [2024-11-26 17:40:30.364132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:53.058 [2024-11-26 17:40:30.364138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:53.059 [2024-11-26 17:40:30.364148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:53.059 [2024-11-26 17:40:30.364155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:53.059 [2024-11-26 17:40:30.364165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:53.059 [2024-11-26 17:40:30.364172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:53.059 [2024-11-26 17:40:30.364184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:53.059 [2024-11-26 17:40:30.364192] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:53.059 [2024-11-26 17:40:30.364201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:53.059 [2024-11-26 17:40:30.364207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:43:53.059 [2024-11-26 17:40:30.364216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:53.059 [2024-11-26 17:40:30.364223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:53.059 [2024-11-26 17:40:30.364232] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:53.059 [2024-11-26 17:40:30.364239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:53.059 [2024-11-26 17:40:30.364254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:53.059 [2024-11-26 17:40:30.364262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:53.059 [2024-11-26 17:40:30.364271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:53.059 [2024-11-26 17:40:30.364279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:53.059 [2024-11-26 17:40:30.364289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:53.059 [2024-11-26 17:40:30.364297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:53.059 [2024-11-26 17:40:30.364307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:43:53.059 [2024-11-26 17:40:30.364315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:53.059 [2024-11-26 17:40:30.364358] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:43:53.059 [2024-11-26 17:40:30.364368] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:43:57.253 [2024-11-26 17:40:34.127700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.127788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:43:57.253 [2024-11-26 17:40:34.127807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3770.597 ms 00:43:57.253 [2024-11-26 17:40:34.127817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.175152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.175220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:57.253 [2024-11-26 17:40:34.175239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.086 ms 00:43:57.253 [2024-11-26 17:40:34.175248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.175424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.175435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:57.253 [2024-11-26 17:40:34.175450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:43:57.253 [2024-11-26 17:40:34.175459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.237283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.237487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:57.253 [2024-11-26 17:40:34.237508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.906 ms 00:43:57.253 [2024-11-26 17:40:34.237518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.237578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.237587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:57.253 [2024-11-26 17:40:34.237600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:57.253 [2024-11-26 17:40:34.237624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.238503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.238517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:57.253 [2024-11-26 17:40:34.238530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:43:57.253 [2024-11-26 17:40:34.238540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.238681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.238696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:57.253 [2024-11-26 17:40:34.238711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:43:57.253 [2024-11-26 17:40:34.238720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.261409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.261453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:57.253 [2024-11-26 
17:40:34.261468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.704 ms 00:43:57.253 [2024-11-26 17:40:34.261508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.275987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:43:57.253 [2024-11-26 17:40:34.285686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.285728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:57.253 [2024-11-26 17:40:34.285742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.117 ms 00:43:57.253 [2024-11-26 17:40:34.285753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.382840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.383029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:43:57.253 [2024-11-26 17:40:34.383065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.226 ms 00:43:57.253 [2024-11-26 17:40:34.383076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.383285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.383303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:57.253 [2024-11-26 17:40:34.383312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:43:57.253 [2024-11-26 17:40:34.383327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.418723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.418766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:43:57.253 [2024-11-26 17:40:34.418778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.419 ms 00:43:57.253 [2024-11-26 17:40:34.418789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.452891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.452929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:43:57.253 [2024-11-26 17:40:34.452940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.133 ms 00:43:57.253 [2024-11-26 17:40:34.452966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.453751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.453771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:57.253 [2024-11-26 17:40:34.453780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:43:57.253 [2024-11-26 17:40:34.453790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.552803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.552875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:43:57.253 [2024-11-26 17:40:34.552890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.158 ms 00:43:57.253 [2024-11-26 17:40:34.552902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 
17:40:34.591343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.591466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:43:57.253 [2024-11-26 17:40:34.591487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.441 ms 00:43:57.253 [2024-11-26 17:40:34.591499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.628166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.628213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:43:57.253 [2024-11-26 17:40:34.628224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.698 ms 00:43:57.253 [2024-11-26 17:40:34.628250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.665341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.665387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:57.253 [2024-11-26 17:40:34.665400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.119 ms 00:43:57.253 [2024-11-26 17:40:34.665411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.665455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.665472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:57.253 [2024-11-26 17:40:34.665481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:43:57.253 [2024-11-26 17:40:34.665491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.665594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:57.253 [2024-11-26 17:40:34.665624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:57.253 [2024-11-26 17:40:34.665634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:43:57.253 [2024-11-26 17:40:34.665644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:57.253 [2024-11-26 17:40:34.667104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4336.602 ms, result 0 00:43:57.253 { 00:43:57.253 "name": "ftl0", 00:43:57.253 "uuid": "baa272b7-394c-459c-af92-427aec444244" 00:43:57.253 } 00:43:57.513 17:40:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:43:57.513 17:40:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:43:57.513 17:40:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:43:57.513 17:40:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:43:57.773 [2024-11-26 17:40:35.010828] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:43:57.773 I/O size of 69632 is greater than zero copy threshold (65536). 00:43:57.773 Zero copy mechanism will not be used. 00:43:57.773 Running I/O for 4 seconds... 
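Here ftl/bdevperf.sh@30 drives bdevperf over its RPC socket with the bundled bdevperf.py helper instead of restarting the app for each workload. The 69632-byte IO size works out to 68 KiB (65536 + 4096 bytes), just above the 65536-byte zero-copy threshold, which is why bdevperf announces that the zero copy mechanism will not be used for this pass. A minimal standalone sketch of the same sequence, assuming a bdev config at ftl.json and the default RPC socket path (neither taken from this run):

  # start bdevperf in wait-for-RPC mode, then trigger the same 4 s randwrite pass
  ./build/examples/bdevperf -z --json ftl.json &
  ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632

The per-second throughput samples and the latency summary for this pass follow.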
00:43:59.727 1671.00 IOPS, 110.96 MiB/s [2024-11-26T17:40:38.112Z] 1697.00 IOPS, 112.69 MiB/s [2024-11-26T17:40:39.051Z] 1736.67 IOPS, 115.33 MiB/s [2024-11-26T17:40:39.051Z] 1754.50 IOPS, 116.51 MiB/s 00:44:01.605 Latency(us) 00:44:01.605 [2024-11-26T17:40:39.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:01.605 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:44:01.605 ftl0 : 4.00 1754.07 116.48 0.00 0.00 599.08 232.52 2160.68 00:44:01.605 [2024-11-26T17:40:39.051Z] =================================================================================================================== 00:44:01.605 [2024-11-26T17:40:39.051Z] Total : 1754.07 116.48 0.00 0.00 599.08 232.52 2160.68 00:44:01.605 [2024-11-26 17:40:39.015354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:44:01.605 { 00:44:01.605 "results": [ 00:44:01.605 { 00:44:01.605 "job": "ftl0", 00:44:01.605 "core_mask": "0x1", 00:44:01.605 "workload": "randwrite", 00:44:01.605 "status": "finished", 00:44:01.605 "queue_depth": 1, 00:44:01.605 "io_size": 69632, 00:44:01.605 "runtime": 4.001561, 00:44:01.605 "iops": 1754.0654759480112, 00:44:01.605 "mibps": 116.48091051217261, 00:44:01.605 "io_failed": 0, 00:44:01.605 "io_timeout": 0, 00:44:01.605 "avg_latency_us": 599.0783932072087, 00:44:01.605 "min_latency_us": 232.5240174672489, 00:44:01.605 "max_latency_us": 2160.6847161572055 00:44:01.605 } 00:44:01.605 ], 00:44:01.605 "core_count": 1 00:44:01.605 } 00:44:01.865 17:40:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:44:01.865 [2024-11-26 17:40:39.157250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:44:01.865 Running I/O for 4 seconds... 
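Each pass reports twice: the human-readable latency table and the same figures as a JSON results blob, interleaved with the FTL IO channel teardown notices. Once the leading timestamps are stripped, the blob is plain JSON, so a captured copy can be post-processed directly; a small sketch with jq, assuming the blob was saved to results.json (a hypothetical file name):

  # print the headline numbers from a captured bdevperf results blob
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json

For the q=1 pass above this prints the full-precision values that the table rounds to 1754.07 IOPS and a 599.08 us average.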
00:44:03.738 10243.00 IOPS, 40.01 MiB/s [2024-11-26T17:40:42.588Z] 10051.50 IOPS, 39.26 MiB/s [2024-11-26T17:40:43.522Z] 9948.00 IOPS, 38.86 MiB/s [2024-11-26T17:40:43.522Z] 9950.25 IOPS, 38.87 MiB/s 00:44:06.076 Latency(us) 00:44:06.076 [2024-11-26T17:40:43.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:06.076 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:44:06.076 ftl0 : 4.02 9941.02 38.83 0.00 0.00 12847.98 291.55 24039.41 00:44:06.076 [2024-11-26T17:40:43.522Z] =================================================================================================================== 00:44:06.076 [2024-11-26T17:40:43.522Z] Total : 9941.02 38.83 0.00 0.00 12847.98 0.00 24039.41 00:44:06.076 { 00:44:06.076 "results": [ 00:44:06.076 { 00:44:06.076 "job": "ftl0", 00:44:06.076 "core_mask": "0x1", 00:44:06.076 "workload": "randwrite", 00:44:06.076 "status": "finished", 00:44:06.076 "queue_depth": 128, 00:44:06.076 "io_size": 4096, 00:44:06.076 "runtime": 4.016288, 00:44:06.076 "iops": 9941.020165884518, 00:44:06.076 "mibps": 38.8321100229864, 00:44:06.076 "io_failed": 0, 00:44:06.076 "io_timeout": 0, 00:44:06.076 "avg_latency_us": 12847.975040637406, 00:44:06.076 "min_latency_us": 291.54934497816595, 00:44:06.076 "max_latency_us": 24039.40611353712 00:44:06.076 } 00:44:06.076 ], 00:44:06.076 "core_count": 1 00:44:06.076 } 00:44:06.076 [2024-11-26 17:40:43.177325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:44:06.076 17:40:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:44:06.076 [2024-11-26 17:40:43.295879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:44:06.076 Running I/O for 4 seconds... 
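This final pass uses -w verify, so bdevperf reads back and checks every block it wrote; its results blob below carries an extra verify_range object (start 0, length 0x1400000) on top of the usual counters. Earlier, ftl/bdevperf.sh@28 confirmed the target bdev existed before any IO was issued by querying its stats and matching the name; the same guard can be scripted standalone, assuming the default RPC socket (an assumption here):

  # confirm the FTL bdev is up before driving IO at it
  ./scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0 && echo ftl0 ready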
00:44:07.948 8032.00 IOPS, 31.38 MiB/s [2024-11-26T17:40:46.341Z] 8002.00 IOPS, 31.26 MiB/s [2024-11-26T17:40:47.719Z] 7954.00 IOPS, 31.07 MiB/s [2024-11-26T17:40:47.719Z] 7955.25 IOPS, 31.08 MiB/s 00:44:10.273 Latency(us) 00:44:10.273 [2024-11-26T17:40:47.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:10.273 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:10.273 Verification LBA range: start 0x0 length 0x1400000 00:44:10.273 ftl0 : 4.01 7967.44 31.12 0.00 0.00 16015.18 291.55 17972.32 00:44:10.273 [2024-11-26T17:40:47.719Z] =================================================================================================================== 00:44:10.273 [2024-11-26T17:40:47.719Z] Total : 7967.44 31.12 0.00 0.00 16015.18 0.00 17972.32 00:44:10.273 [2024-11-26 17:40:47.319232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:44:10.273 "results": [ 00:44:10.273 { 00:44:10.273 "job": "ftl0", 00:44:10.273 "core_mask": "0x1", 00:44:10.273 "workload": "verify", 00:44:10.273 "status": "finished", 00:44:10.273 "verify_range": { 00:44:10.273 "start": 0, 00:44:10.273 "length": 20971520 00:44:10.273 }, 00:44:10.273 "queue_depth": 128, 00:44:10.273 "io_size": 4096, 00:44:10.273 "runtime": 4.00982, 00:44:10.273 "iops": 7967.439934959674, 00:44:10.273 "mibps": 31.122812245936228, 00:44:10.273 "io_failed": 0, 00:44:10.273 "io_timeout": 0, 00:44:10.273 "avg_latency_us": 16015.180945018186, 00:44:10.273 "min_latency_us": 291.54934497816595, 00:44:10.273 "max_latency_us": 17972.317903930132 00:44:10.273 } 00:44:10.273 ], 00:44:10.273 "core_count": 1 00:44:10.273 } 00:44:10.273 17:40:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-26 17:40:47.539285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.273 [2024-11-26 17:40:47.539354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:10.273 [2024-11-26 17:40:47.539385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:10.273 [2024-11-26 17:40:47.539395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.273 [2024-11-26 17:40:47.539419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:10.273 [2024-11-26 17:40:47.544087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.273 [2024-11-26 17:40:47.544126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:10.273 [2024-11-26 17:40:47.544140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:44:10.273 [2024-11-26 17:40:47.544148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.273 [2024-11-26 17:40:47.546289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.273 [2024-11-26 17:40:47.546366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:10.273 [2024-11-26 17:40:47.546429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:44:10.273 [2024-11-26 17:40:47.546455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.536 [2024-11-26 17:40:47.768117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.536 [2024-11-26 17:40:47.768321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:44:10.536 [2024-11-26 17:40:47.768377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 222.004 ms 00:44:10.536 [2024-11-26 17:40:47.768405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.536 [2024-11-26 17:40:47.773766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.536 [2024-11-26 17:40:47.773834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:10.536 [2024-11-26 17:40:47.773869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:44:10.537 [2024-11-26 17:40:47.773901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.811066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 17:40:47.811161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:10.537 [2024-11-26 17:40:47.811197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.131 ms 00:44:10.537 [2024-11-26 17:40:47.811207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.832579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 17:40:47.832628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:10.537 [2024-11-26 17:40:47.832644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.355 ms 00:44:10.537 [2024-11-26 17:40:47.832653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.832809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 17:40:47.832821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:10.537 [2024-11-26 17:40:47.832836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:44:10.537 [2024-11-26 17:40:47.832846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.867942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 17:40:47.867975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:10.537 [2024-11-26 17:40:47.867989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.147 ms 00:44:10.537 [2024-11-26 17:40:47.867996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.902543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 17:40:47.902577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:10.537 [2024-11-26 17:40:47.902590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.575 ms 00:44:10.537 [2024-11-26 17:40:47.902597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.936698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 17:40:47.936770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:10.537 [2024-11-26 17:40:47.936787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.104 ms 00:44:10.537 [2024-11-26 17:40:47.936810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.971922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.537 [2024-11-26 
17:40:47.971955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:10.537 [2024-11-26 17:40:47.971972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.074 ms 00:44:10.537 [2024-11-26 17:40:47.971980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.537 [2024-11-26 17:40:47.972015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:10.537 [2024-11-26 17:40:47.972030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:10.537 [2024-11-26 17:40:47.972442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972754] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.972990] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.973000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.973009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.973018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:10.538 [2024-11-26 17:40:47.973033] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:10.538 [2024-11-26 17:40:47.973043] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: baa272b7-394c-459c-af92-427aec444244 00:44:10.538 [2024-11-26 17:40:47.973055] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:10.538 [2024-11-26 17:40:47.973065] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:10.538 [2024-11-26 17:40:47.973072] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:10.538 [2024-11-26 17:40:47.973082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:10.538 [2024-11-26 17:40:47.973089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:10.538 [2024-11-26 17:40:47.973099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:10.538 [2024-11-26 17:40:47.973107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:10.538 [2024-11-26 17:40:47.973118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:10.538 [2024-11-26 17:40:47.973125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:10.538 [2024-11-26 17:40:47.973134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.538 [2024-11-26 17:40:47.973142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:10.538 [2024-11-26 17:40:47.973202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:44:10.538 [2024-11-26 17:40:47.973211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:47.996082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.804 [2024-11-26 17:40:47.996121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:10.804 [2024-11-26 17:40:47.996137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.869 ms 00:44:10.804 [2024-11-26 17:40:47.996145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:47.996790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.804 [2024-11-26 17:40:47.996802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:10.804 [2024-11-26 17:40:47.996813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:44:10.804 [2024-11-26 17:40:47.996821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:48.055601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:10.804 [2024-11-26 17:40:48.055650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:10.804 [2024-11-26 17:40:48.055667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:10.804 [2024-11-26 17:40:48.055676] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:48.055749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:10.804 [2024-11-26 17:40:48.055758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:10.804 [2024-11-26 17:40:48.055770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:10.804 [2024-11-26 17:40:48.055777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:48.055903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:10.804 [2024-11-26 17:40:48.055916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:10.804 [2024-11-26 17:40:48.055927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:10.804 [2024-11-26 17:40:48.055934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:48.055954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:10.804 [2024-11-26 17:40:48.055963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:10.804 [2024-11-26 17:40:48.055972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:10.804 [2024-11-26 17:40:48.055980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.804 [2024-11-26 17:40:48.196337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:10.804 [2024-11-26 17:40:48.196406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:10.804 [2024-11-26 17:40:48.196427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:10.804 [2024-11-26 17:40:48.196436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.063 [2024-11-26 17:40:48.304707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.063 [2024-11-26 17:40:48.304846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:11.063 [2024-11-26 17:40:48.304882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:11.063 [2024-11-26 17:40:48.304891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.063 [2024-11-26 17:40:48.305036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.063 [2024-11-26 17:40:48.305046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:11.063 [2024-11-26 17:40:48.305057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:11.063 [2024-11-26 17:40:48.305065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.063 [2024-11-26 17:40:48.305118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.063 [2024-11-26 17:40:48.305128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:11.063 [2024-11-26 17:40:48.305138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:11.063 [2024-11-26 17:40:48.305146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.063 [2024-11-26 17:40:48.305269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.063 [2024-11-26 17:40:48.305284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:11.063 [2024-11-26 17:40:48.305298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:44:11.063 [2024-11-26 17:40:48.305306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.064 [2024-11-26 17:40:48.305354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.064 [2024-11-26 17:40:48.305365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:11.064 [2024-11-26 17:40:48.305376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:11.064 [2024-11-26 17:40:48.305384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.064 [2024-11-26 17:40:48.305429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.064 [2024-11-26 17:40:48.305441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:11.064 [2024-11-26 17:40:48.305451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:11.064 [2024-11-26 17:40:48.305470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.064 [2024-11-26 17:40:48.305520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:11.064 [2024-11-26 17:40:48.305530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:11.064 [2024-11-26 17:40:48.305541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:11.064 [2024-11-26 17:40:48.305548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.064 [2024-11-26 17:40:48.305728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 767.858 ms, result 0 00:44:11.064 true 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78561 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78561 ']' 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78561 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78561 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78561' 00:44:11.064 killing process with pid 78561 00:44:11.064 Received shutdown signal, test time was about 4.000000 seconds 00:44:11.064 00:44:11.064 Latency(us) 00:44:11.064 [2024-11-26T17:40:48.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:11.064 [2024-11-26T17:40:48.510Z] =================================================================================================================== 00:44:11.064 [2024-11-26T17:40:48.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78561 00:44:11.064 17:40:48 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78561 00:44:17.636 Remove shared memory files 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:44:17.636 17:40:53 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:44:17.636 ************************************ 00:44:17.636 END TEST ftl_bdevperf 00:44:17.636 ************************************ 00:44:17.636 00:44:17.636 real 0m27.771s 00:44:17.636 user 0m30.354s 00:44:17.636 sys 0m1.372s 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:17.636 17:40:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:17.636 17:40:53 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:44:17.636 17:40:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:17.636 17:40:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:17.636 17:40:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:44:17.636 ************************************ 00:44:17.636 START TEST ftl_trim 00:44:17.636 ************************************ 00:44:17.636 17:40:53 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:44:17.636 * Looking for test storage... 00:44:17.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:44:17.636 17:40:53 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:17.636 17:40:53 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:44:17.636 17:40:53 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:17.636 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:17.636 17:40:54 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:44:17.636 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:17.636 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.636 --rc genhtml_branch_coverage=1 00:44:17.636 --rc genhtml_function_coverage=1 00:44:17.636 --rc genhtml_legend=1 00:44:17.636 --rc geninfo_all_blocks=1 00:44:17.636 --rc geninfo_unexecuted_blocks=1 00:44:17.636 00:44:17.636 ' 00:44:17.636 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.636 --rc genhtml_branch_coverage=1 00:44:17.636 --rc genhtml_function_coverage=1 00:44:17.636 --rc genhtml_legend=1 00:44:17.636 --rc geninfo_all_blocks=1 00:44:17.636 --rc geninfo_unexecuted_blocks=1 00:44:17.636 00:44:17.636 ' 00:44:17.636 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.636 --rc genhtml_branch_coverage=1 00:44:17.636 --rc genhtml_function_coverage=1 00:44:17.636 --rc genhtml_legend=1 00:44:17.636 --rc geninfo_all_blocks=1 00:44:17.636 --rc geninfo_unexecuted_blocks=1 00:44:17.636 00:44:17.636 ' 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:17.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.637 --rc genhtml_branch_coverage=1 00:44:17.637 --rc genhtml_function_coverage=1 00:44:17.637 --rc genhtml_legend=1 00:44:17.637 --rc geninfo_all_blocks=1 00:44:17.637 --rc geninfo_unexecuted_blocks=1 00:44:17.637 00:44:17.637 ' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
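The setup that follows is easier to read end to end: trim.sh sources test/ftl/common.sh, which resolves rootdir from the test directory (the readlink -f calls above) and points rpc_py at scripts/rpc.py, so every subsequent step is an rpc.py call against the spdk_tgt started with -m 0x7 and answering on /var/tmp/spdk.sock. A minimal sketch of the RPC sequence this trace performs, with run-time values replaced by placeholders (<lvs-uuid>, <lvol-uuid>, <split-size-mib> stand for 31f8e687-..., cf58a87f-..., and 5171 in this particular run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base device: thin-provisioned (-t) lvol of 103424 MiB carved out of nvme0n1
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
  # NV cache: first partition of a split on the 0000:00:10.0 controller
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s <split-size-mib> 1
  # FTL bdev on top; startup may scrub the NV cache, hence the 240 s timeout
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The trace also deletes any pre-existing lvstore first (bdev_lvol_get_lvstores, bdev_lvol_delete_lvstore) and derives the split size from the cache bdev's reported geometry. For scale, --l2p_dram_limit 60 caps the resident L2P at 59 of 60 MiB per the ftl_l2p_cache message further down, while the layout dump reports 23592960 L2P entries at 4 bytes each, i.e. the 90.00 MiB on-disk l2p region.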
00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:17.637 17:40:54 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78944 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:44:17.637 17:40:54 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78944 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78944 ']' 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:17.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:17.637 17:40:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:44:17.637 [2024-11-26 17:40:54.210959] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:44:17.637 [2024-11-26 17:40:54.211168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78944 ] 00:44:17.637 [2024-11-26 17:40:54.380383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:17.637 [2024-11-26 17:40:54.531988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:17.637 [2024-11-26 17:40:54.532121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:17.637 [2024-11-26 17:40:54.532164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:18.208 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:18.208 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:44:18.208 17:40:55 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:18.208 17:40:55 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:44:18.208 17:40:55 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:18.208 17:40:55 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:44:18.208 17:40:55 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:44:18.208 17:40:55 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:18.468 17:40:55 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:18.468 17:40:55 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:44:18.468 17:40:55 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:18.468 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:44:18.468 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:18.468 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:18.468 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:18.468 17:40:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:18.729 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:18.729 { 00:44:18.729 "name": "nvme0n1", 00:44:18.729 "aliases": [ 
00:44:18.729 "e1369b2c-6078-4023-975b-d83ddd51adc9" 00:44:18.729 ], 00:44:18.729 "product_name": "NVMe disk", 00:44:18.729 "block_size": 4096, 00:44:18.729 "num_blocks": 1310720, 00:44:18.729 "uuid": "e1369b2c-6078-4023-975b-d83ddd51adc9", 00:44:18.729 "numa_id": -1, 00:44:18.729 "assigned_rate_limits": { 00:44:18.729 "rw_ios_per_sec": 0, 00:44:18.729 "rw_mbytes_per_sec": 0, 00:44:18.729 "r_mbytes_per_sec": 0, 00:44:18.729 "w_mbytes_per_sec": 0 00:44:18.729 }, 00:44:18.729 "claimed": true, 00:44:18.729 "claim_type": "read_many_write_one", 00:44:18.729 "zoned": false, 00:44:18.729 "supported_io_types": { 00:44:18.729 "read": true, 00:44:18.729 "write": true, 00:44:18.729 "unmap": true, 00:44:18.729 "flush": true, 00:44:18.729 "reset": true, 00:44:18.729 "nvme_admin": true, 00:44:18.729 "nvme_io": true, 00:44:18.729 "nvme_io_md": false, 00:44:18.729 "write_zeroes": true, 00:44:18.729 "zcopy": false, 00:44:18.729 "get_zone_info": false, 00:44:18.729 "zone_management": false, 00:44:18.729 "zone_append": false, 00:44:18.729 "compare": true, 00:44:18.729 "compare_and_write": false, 00:44:18.729 "abort": true, 00:44:18.729 "seek_hole": false, 00:44:18.729 "seek_data": false, 00:44:18.729 "copy": true, 00:44:18.729 "nvme_iov_md": false 00:44:18.729 }, 00:44:18.729 "driver_specific": { 00:44:18.729 "nvme": [ 00:44:18.729 { 00:44:18.729 "pci_address": "0000:00:11.0", 00:44:18.729 "trid": { 00:44:18.729 "trtype": "PCIe", 00:44:18.729 "traddr": "0000:00:11.0" 00:44:18.729 }, 00:44:18.729 "ctrlr_data": { 00:44:18.729 "cntlid": 0, 00:44:18.729 "vendor_id": "0x1b36", 00:44:18.729 "model_number": "QEMU NVMe Ctrl", 00:44:18.729 "serial_number": "12341", 00:44:18.729 "firmware_revision": "8.0.0", 00:44:18.729 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:18.729 "oacs": { 00:44:18.729 "security": 0, 00:44:18.729 "format": 1, 00:44:18.729 "firmware": 0, 00:44:18.729 "ns_manage": 1 00:44:18.729 }, 00:44:18.729 "multi_ctrlr": false, 00:44:18.729 "ana_reporting": false 00:44:18.729 }, 00:44:18.729 "vs": { 00:44:18.729 "nvme_version": "1.4" 00:44:18.729 }, 00:44:18.729 "ns_data": { 00:44:18.729 "id": 1, 00:44:18.729 "can_share": false 00:44:18.729 } 00:44:18.729 } 00:44:18.729 ], 00:44:18.729 "mp_policy": "active_passive" 00:44:18.729 } 00:44:18.729 } 00:44:18.729 ]' 00:44:18.729 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:18.729 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:44:18.729 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:18.989 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:44:18.989 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:44:18.989 17:40:56 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ea44e3d4-880d-4df5-b0b5-7658c4be4b52 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:44:18.989 17:40:56 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ea44e3d4-880d-4df5-b0b5-7658c4be4b52 00:44:19.249 17:40:56 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:19.506 17:40:56 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=31f8e687-f521-41e3-b641-bdd1611361cf 00:44:19.506 17:40:56 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 31f8e687-f521-41e3-b641-bdd1611361cf 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:44:19.763 17:40:57 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:19.763 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:19.763 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:19.763 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:19.763 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:19.763 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:20.022 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:20.022 { 00:44:20.022 "name": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:20.022 "aliases": [ 00:44:20.022 "lvs/nvme0n1p0" 00:44:20.022 ], 00:44:20.022 "product_name": "Logical Volume", 00:44:20.022 "block_size": 4096, 00:44:20.022 "num_blocks": 26476544, 00:44:20.022 "uuid": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:20.022 "assigned_rate_limits": { 00:44:20.022 "rw_ios_per_sec": 0, 00:44:20.022 "rw_mbytes_per_sec": 0, 00:44:20.022 "r_mbytes_per_sec": 0, 00:44:20.022 "w_mbytes_per_sec": 0 00:44:20.022 }, 00:44:20.022 "claimed": false, 00:44:20.022 "zoned": false, 00:44:20.022 "supported_io_types": { 00:44:20.022 "read": true, 00:44:20.022 "write": true, 00:44:20.022 "unmap": true, 00:44:20.022 "flush": false, 00:44:20.022 "reset": true, 00:44:20.022 "nvme_admin": false, 00:44:20.022 "nvme_io": false, 00:44:20.022 "nvme_io_md": false, 00:44:20.022 "write_zeroes": true, 00:44:20.022 "zcopy": false, 00:44:20.022 "get_zone_info": false, 00:44:20.022 "zone_management": false, 00:44:20.022 "zone_append": false, 00:44:20.022 "compare": false, 00:44:20.022 "compare_and_write": false, 00:44:20.022 "abort": false, 00:44:20.022 "seek_hole": true, 00:44:20.022 "seek_data": true, 00:44:20.022 "copy": false, 00:44:20.022 "nvme_iov_md": false 00:44:20.022 }, 00:44:20.022 "driver_specific": { 00:44:20.022 "lvol": { 00:44:20.022 "lvol_store_uuid": "31f8e687-f521-41e3-b641-bdd1611361cf", 00:44:20.022 "base_bdev": "nvme0n1", 00:44:20.022 "thin_provision": true, 00:44:20.022 "num_allocated_clusters": 0, 00:44:20.022 "snapshot": false, 00:44:20.022 "clone": false, 00:44:20.022 "esnap_clone": false 00:44:20.022 } 00:44:20.022 } 00:44:20.022 } 00:44:20.022 ]' 00:44:20.022 17:40:57 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:20.022 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:44:20.022 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:20.022 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:20.022 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:20.022 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:44:20.022 17:40:57 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:44:20.022 17:40:57 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:44:20.022 17:40:57 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:20.280 17:40:57 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:20.280 17:40:57 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:20.280 17:40:57 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:20.280 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:20.280 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:20.280 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:20.280 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:20.280 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:20.539 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:20.539 { 00:44:20.539 "name": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:20.539 "aliases": [ 00:44:20.539 "lvs/nvme0n1p0" 00:44:20.539 ], 00:44:20.539 "product_name": "Logical Volume", 00:44:20.539 "block_size": 4096, 00:44:20.539 "num_blocks": 26476544, 00:44:20.539 "uuid": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:20.539 "assigned_rate_limits": { 00:44:20.539 "rw_ios_per_sec": 0, 00:44:20.539 "rw_mbytes_per_sec": 0, 00:44:20.539 "r_mbytes_per_sec": 0, 00:44:20.539 "w_mbytes_per_sec": 0 00:44:20.539 }, 00:44:20.539 "claimed": false, 00:44:20.539 "zoned": false, 00:44:20.539 "supported_io_types": { 00:44:20.539 "read": true, 00:44:20.539 "write": true, 00:44:20.539 "unmap": true, 00:44:20.539 "flush": false, 00:44:20.539 "reset": true, 00:44:20.539 "nvme_admin": false, 00:44:20.539 "nvme_io": false, 00:44:20.539 "nvme_io_md": false, 00:44:20.539 "write_zeroes": true, 00:44:20.539 "zcopy": false, 00:44:20.539 "get_zone_info": false, 00:44:20.539 "zone_management": false, 00:44:20.539 "zone_append": false, 00:44:20.539 "compare": false, 00:44:20.539 "compare_and_write": false, 00:44:20.539 "abort": false, 00:44:20.539 "seek_hole": true, 00:44:20.539 "seek_data": true, 00:44:20.539 "copy": false, 00:44:20.539 "nvme_iov_md": false 00:44:20.539 }, 00:44:20.539 "driver_specific": { 00:44:20.539 "lvol": { 00:44:20.539 "lvol_store_uuid": "31f8e687-f521-41e3-b641-bdd1611361cf", 00:44:20.539 "base_bdev": "nvme0n1", 00:44:20.539 "thin_provision": true, 00:44:20.539 "num_allocated_clusters": 0, 00:44:20.539 "snapshot": false, 00:44:20.539 "clone": false, 00:44:20.539 "esnap_clone": false 00:44:20.539 } 00:44:20.539 } 00:44:20.539 } 00:44:20.539 ]' 00:44:20.539 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:20.539 17:40:57 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:44:20.539 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:20.539 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:20.539 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:20.539 17:40:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:44:20.539 17:40:57 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:44:20.539 17:40:57 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:20.798 17:40:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:44:20.798 17:40:58 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:44:20.798 17:40:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:20.798 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:20.798 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:20.798 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:20.798 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:20.798 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf58a87f-1844-491a-851f-57a6d1e6b427 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:21.058 { 00:44:21.058 "name": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:21.058 "aliases": [ 00:44:21.058 "lvs/nvme0n1p0" 00:44:21.058 ], 00:44:21.058 "product_name": "Logical Volume", 00:44:21.058 "block_size": 4096, 00:44:21.058 "num_blocks": 26476544, 00:44:21.058 "uuid": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:21.058 "assigned_rate_limits": { 00:44:21.058 "rw_ios_per_sec": 0, 00:44:21.058 "rw_mbytes_per_sec": 0, 00:44:21.058 "r_mbytes_per_sec": 0, 00:44:21.058 "w_mbytes_per_sec": 0 00:44:21.058 }, 00:44:21.058 "claimed": false, 00:44:21.058 "zoned": false, 00:44:21.058 "supported_io_types": { 00:44:21.058 "read": true, 00:44:21.058 "write": true, 00:44:21.058 "unmap": true, 00:44:21.058 "flush": false, 00:44:21.058 "reset": true, 00:44:21.058 "nvme_admin": false, 00:44:21.058 "nvme_io": false, 00:44:21.058 "nvme_io_md": false, 00:44:21.058 "write_zeroes": true, 00:44:21.058 "zcopy": false, 00:44:21.058 "get_zone_info": false, 00:44:21.058 "zone_management": false, 00:44:21.058 "zone_append": false, 00:44:21.058 "compare": false, 00:44:21.058 "compare_and_write": false, 00:44:21.058 "abort": false, 00:44:21.058 "seek_hole": true, 00:44:21.058 "seek_data": true, 00:44:21.058 "copy": false, 00:44:21.058 "nvme_iov_md": false 00:44:21.058 }, 00:44:21.058 "driver_specific": { 00:44:21.058 "lvol": { 00:44:21.058 "lvol_store_uuid": "31f8e687-f521-41e3-b641-bdd1611361cf", 00:44:21.058 "base_bdev": "nvme0n1", 00:44:21.058 "thin_provision": true, 00:44:21.058 "num_allocated_clusters": 0, 00:44:21.058 "snapshot": false, 00:44:21.058 "clone": false, 00:44:21.058 "esnap_clone": false 00:44:21.058 } 00:44:21.058 } 00:44:21.058 } 00:44:21.058 ]' 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:21.058 17:40:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:44:21.058 17:40:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:44:21.058 17:40:58 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cf58a87f-1844-491a-851f-57a6d1e6b427 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:44:21.318 [2024-11-26 17:40:58.679103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.679252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:21.319 [2024-11-26 17:40:58.679320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:21.319 [2024-11-26 17:40:58.679355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.682787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.682867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:21.319 [2024-11-26 17:40:58.682888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:44:21.319 [2024-11-26 17:40:58.682897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.683024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:21.319 [2024-11-26 17:40:58.684038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:21.319 [2024-11-26 17:40:58.684114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.684126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:21.319 [2024-11-26 17:40:58.684137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:44:21.319 [2024-11-26 17:40:58.684145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.684255] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fd46404a-df59-44ee-8649-89998e51716e 00:44:21.319 [2024-11-26 17:40:58.686748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.686783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:21.319 [2024-11-26 17:40:58.686794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:44:21.319 [2024-11-26 17:40:58.686805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.701302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.701371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:21.319 [2024-11-26 17:40:58.701384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:44:21.319 [2024-11-26 17:40:58.701401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.701595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.701634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:21.319 [2024-11-26 17:40:58.701647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.095 ms 00:44:21.319 [2024-11-26 17:40:58.701671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.701722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.701737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:21.319 [2024-11-26 17:40:58.701750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:21.319 [2024-11-26 17:40:58.701772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.701821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:21.319 [2024-11-26 17:40:58.707824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.707853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:21.319 [2024-11-26 17:40:58.707866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.024 ms 00:44:21.319 [2024-11-26 17:40:58.707890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.707959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.707985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:21.319 [2024-11-26 17:40:58.707998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:21.319 [2024-11-26 17:40:58.708005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.708040] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:21.319 [2024-11-26 17:40:58.708171] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:21.319 [2024-11-26 17:40:58.708188] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:21.319 [2024-11-26 17:40:58.708200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:21.319 [2024-11-26 17:40:58.708212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708221] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708232] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:21.319 [2024-11-26 17:40:58.708240] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:21.319 [2024-11-26 17:40:58.708253] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:21.319 [2024-11-26 17:40:58.708260] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:21.319 [2024-11-26 17:40:58.708271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.319 [2024-11-26 17:40:58.708279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:21.319 [2024-11-26 17:40:58.708290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:44:21.319 [2024-11-26 17:40:58.708298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.708384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:21.319 [2024-11-26 17:40:58.708392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:21.319 [2024-11-26 17:40:58.708403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:44:21.319 [2024-11-26 17:40:58.708410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.319 [2024-11-26 17:40:58.708527] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:21.319 [2024-11-26 17:40:58.708536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:21.319 [2024-11-26 17:40:58.708547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:21.319 [2024-11-26 17:40:58.708572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:21.319 [2024-11-26 17:40:58.708597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:21.319 [2024-11-26 17:40:58.708632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:21.319 [2024-11-26 17:40:58.708644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:21.319 [2024-11-26 17:40:58.708657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:21.319 [2024-11-26 17:40:58.708667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:21.319 [2024-11-26 17:40:58.708680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:21.319 [2024-11-26 17:40:58.708686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:21.319 [2024-11-26 17:40:58.708707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:21.319 [2024-11-26 17:40:58.708742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:21.319 [2024-11-26 17:40:58.708764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:21.319 [2024-11-26 17:40:58.708787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708802] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:44:21.319 [2024-11-26 17:40:58.708808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:21.319 [2024-11-26 17:40:58.708835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:21.319 [2024-11-26 17:40:58.708850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:21.319 [2024-11-26 17:40:58.708857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:21.319 [2024-11-26 17:40:58.708865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:21.319 [2024-11-26 17:40:58.708873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:21.319 [2024-11-26 17:40:58.708882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:21.319 [2024-11-26 17:40:58.708888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:21.319 [2024-11-26 17:40:58.708925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:21.319 [2024-11-26 17:40:58.708936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.708943] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:21.319 [2024-11-26 17:40:58.708958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:21.319 [2024-11-26 17:40:58.708971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:21.319 [2024-11-26 17:40:58.708988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.319 [2024-11-26 17:40:58.709000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:21.319 [2024-11-26 17:40:58.709018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:21.320 [2024-11-26 17:40:58.709029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:21.320 [2024-11-26 17:40:58.709044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:21.320 [2024-11-26 17:40:58.709055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:21.320 [2024-11-26 17:40:58.709068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:21.320 [2024-11-26 17:40:58.709083] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:21.320 [2024-11-26 17:40:58.709100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:21.320 [2024-11-26 17:40:58.709137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:21.320 [2024-11-26 17:40:58.709149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:44:21.320 [2024-11-26 17:40:58.709164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:21.320 [2024-11-26 17:40:58.709175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:21.320 [2024-11-26 17:40:58.709190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:21.320 [2024-11-26 17:40:58.709201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:21.320 [2024-11-26 17:40:58.709215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:21.320 [2024-11-26 17:40:58.709227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:21.320 [2024-11-26 17:40:58.709246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:21.320 [2024-11-26 17:40:58.709313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:21.320 [2024-11-26 17:40:58.709329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:21.320 [2024-11-26 17:40:58.709365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:21.320 [2024-11-26 17:40:58.709377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:21.320 [2024-11-26 17:40:58.709392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:21.320 [2024-11-26 17:40:58.709405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.320 [2024-11-26 17:40:58.709421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:21.320 [2024-11-26 17:40:58.709434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:44:21.320 [2024-11-26 17:40:58.709449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.320 [2024-11-26 17:40:58.709554] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:44:21.320 [2024-11-26 17:40:58.709645] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:44:24.615 [2024-11-26 17:41:01.806441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.806520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:24.616 [2024-11-26 17:41:01.806538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3102.858 ms 00:44:24.616 [2024-11-26 17:41:01.806550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.853804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.853877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:24.616 [2024-11-26 17:41:01.853892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.927 ms 00:44:24.616 [2024-11-26 17:41:01.853904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.854157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.854175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:24.616 [2024-11-26 17:41:01.854202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:24.616 [2024-11-26 17:41:01.854223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.923769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.923837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:24.616 [2024-11-26 17:41:01.923850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.642 ms 00:44:24.616 [2024-11-26 17:41:01.923877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.923997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.924009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:24.616 [2024-11-26 17:41:01.924018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:24.616 [2024-11-26 17:41:01.924029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.924831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.924848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:24.616 [2024-11-26 17:41:01.924858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:44:24.616 [2024-11-26 17:41:01.924867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.924990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.925001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:24.616 [2024-11-26 17:41:01.925030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:44:24.616 [2024-11-26 17:41:01.925045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.950951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.951012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:44:24.616 [2024-11-26 17:41:01.951025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.915 ms 00:44:24.616 [2024-11-26 17:41:01.951052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.616 [2024-11-26 17:41:01.965768] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:24.616 [2024-11-26 17:41:01.993842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.616 [2024-11-26 17:41:01.993925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:24.616 [2024-11-26 17:41:01.993944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.705 ms 00:44:24.616 [2024-11-26 17:41:01.993954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.874 [2024-11-26 17:41:02.100479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.874 [2024-11-26 17:41:02.100560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:24.874 [2024-11-26 17:41:02.100594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.414 ms 00:44:24.874 [2024-11-26 17:41:02.100604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.874 [2024-11-26 17:41:02.100889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.874 [2024-11-26 17:41:02.100902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:24.874 [2024-11-26 17:41:02.100917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:44:24.874 [2024-11-26 17:41:02.100926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.874 [2024-11-26 17:41:02.137784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.874 [2024-11-26 17:41:02.137842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:24.874 [2024-11-26 17:41:02.137859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.889 ms 00:44:24.874 [2024-11-26 17:41:02.137871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.874 [2024-11-26 17:41:02.177148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.874 [2024-11-26 17:41:02.177238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:24.874 [2024-11-26 17:41:02.177258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.261 ms 00:44:24.874 [2024-11-26 17:41:02.177267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.874 [2024-11-26 17:41:02.178217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.875 [2024-11-26 17:41:02.178241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:24.875 [2024-11-26 17:41:02.178255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:44:24.875 [2024-11-26 17:41:02.178265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.875 [2024-11-26 17:41:02.290118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.875 [2024-11-26 17:41:02.290312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:44:24.875 [2024-11-26 17:41:02.290358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.014 ms 00:44:24.875 [2024-11-26 17:41:02.290367] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:44:25.135 [2024-11-26 17:41:02.335520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:25.135 [2024-11-26 17:41:02.335600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:44:25.135 [2024-11-26 17:41:02.335637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.080 ms 00:44:25.135 [2024-11-26 17:41:02.335647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:25.135 [2024-11-26 17:41:02.376643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:25.135 [2024-11-26 17:41:02.376812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:44:25.135 [2024-11-26 17:41:02.376836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.918 ms 00:44:25.135 [2024-11-26 17:41:02.376844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:25.135 [2024-11-26 17:41:02.414263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:25.135 [2024-11-26 17:41:02.414335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:25.135 [2024-11-26 17:41:02.414351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.386 ms 00:44:25.135 [2024-11-26 17:41:02.414360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:25.135 [2024-11-26 17:41:02.414451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:25.135 [2024-11-26 17:41:02.414463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:25.135 [2024-11-26 17:41:02.414478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:25.135 [2024-11-26 17:41:02.414486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:25.135 [2024-11-26 17:41:02.414578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:25.135 [2024-11-26 17:41:02.414587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:25.135 [2024-11-26 17:41:02.414597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:44:25.135 [2024-11-26 17:41:02.414605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:25.135 [2024-11-26 17:41:02.416008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:25.135 [2024-11-26 17:41:02.420708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3743.737 ms, result 0 00:44:25.135 [2024-11-26 17:41:02.421695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:25.135 { 00:44:25.135 "name": "ftl0", 00:44:25.135 "uuid": "fd46404a-df59-44ee-8649-89998e51716e" 00:44:25.135 } 00:44:25.135 17:41:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:44:25.135 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:44:25.135 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:25.135 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:44:25.135 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:25.135 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:25.135 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:25.394 17:41:02 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:44:25.654 [ 00:44:25.654 { 00:44:25.654 "name": "ftl0", 00:44:25.654 "aliases": [ 00:44:25.654 "fd46404a-df59-44ee-8649-89998e51716e" 00:44:25.654 ], 00:44:25.654 "product_name": "FTL disk", 00:44:25.654 "block_size": 4096, 00:44:25.654 "num_blocks": 23592960, 00:44:25.654 "uuid": "fd46404a-df59-44ee-8649-89998e51716e", 00:44:25.654 "assigned_rate_limits": { 00:44:25.654 "rw_ios_per_sec": 0, 00:44:25.654 "rw_mbytes_per_sec": 0, 00:44:25.654 "r_mbytes_per_sec": 0, 00:44:25.654 "w_mbytes_per_sec": 0 00:44:25.654 }, 00:44:25.654 "claimed": false, 00:44:25.654 "zoned": false, 00:44:25.654 "supported_io_types": { 00:44:25.654 "read": true, 00:44:25.654 "write": true, 00:44:25.654 "unmap": true, 00:44:25.654 "flush": true, 00:44:25.654 "reset": false, 00:44:25.654 "nvme_admin": false, 00:44:25.654 "nvme_io": false, 00:44:25.654 "nvme_io_md": false, 00:44:25.654 "write_zeroes": true, 00:44:25.654 "zcopy": false, 00:44:25.654 "get_zone_info": false, 00:44:25.654 "zone_management": false, 00:44:25.654 "zone_append": false, 00:44:25.654 "compare": false, 00:44:25.654 "compare_and_write": false, 00:44:25.654 "abort": false, 00:44:25.654 "seek_hole": false, 00:44:25.654 "seek_data": false, 00:44:25.654 "copy": false, 00:44:25.654 "nvme_iov_md": false 00:44:25.654 }, 00:44:25.654 "driver_specific": { 00:44:25.654 "ftl": { 00:44:25.654 "base_bdev": "cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:25.654 "cache": "nvc0n1p0" 00:44:25.654 } 00:44:25.654 } 00:44:25.654 } 00:44:25.654 ] 00:44:25.654 17:41:02 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:44:25.654 17:41:02 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:44:25.654 17:41:02 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:44:25.914 17:41:03 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:44:25.914 17:41:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:44:25.914 17:41:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:44:25.914 { 00:44:25.914 "name": "ftl0", 00:44:25.914 "aliases": [ 00:44:25.914 "fd46404a-df59-44ee-8649-89998e51716e" 00:44:25.914 ], 00:44:25.914 "product_name": "FTL disk", 00:44:25.914 "block_size": 4096, 00:44:25.914 "num_blocks": 23592960, 00:44:25.914 "uuid": "fd46404a-df59-44ee-8649-89998e51716e", 00:44:25.914 "assigned_rate_limits": { 00:44:25.914 "rw_ios_per_sec": 0, 00:44:25.914 "rw_mbytes_per_sec": 0, 00:44:25.914 "r_mbytes_per_sec": 0, 00:44:25.914 "w_mbytes_per_sec": 0 00:44:25.914 }, 00:44:25.914 "claimed": false, 00:44:25.914 "zoned": false, 00:44:25.914 "supported_io_types": { 00:44:25.914 "read": true, 00:44:25.914 "write": true, 00:44:25.914 "unmap": true, 00:44:25.914 "flush": true, 00:44:25.914 "reset": false, 00:44:25.914 "nvme_admin": false, 00:44:25.914 "nvme_io": false, 00:44:25.914 "nvme_io_md": false, 00:44:25.914 "write_zeroes": true, 00:44:25.914 "zcopy": false, 00:44:25.914 "get_zone_info": false, 00:44:25.914 "zone_management": false, 00:44:25.914 "zone_append": false, 00:44:25.914 "compare": false, 00:44:25.914 "compare_and_write": false, 00:44:25.914 "abort": false, 00:44:25.914 "seek_hole": false, 00:44:25.914 "seek_data": false, 00:44:25.914 "copy": false, 00:44:25.914 "nvme_iov_md": false 00:44:25.914 }, 00:44:25.914 "driver_specific": { 00:44:25.914 "ftl": { 00:44:25.914 "base_bdev": 
"cf58a87f-1844-491a-851f-57a6d1e6b427", 00:44:25.914 "cache": "nvc0n1p0" 00:44:25.914 } 00:44:25.914 } 00:44:25.914 } 00:44:25.914 ]' 00:44:25.914 17:41:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:44:26.173 17:41:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:44:26.173 17:41:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:44:26.173 [2024-11-26 17:41:03.545414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.173 [2024-11-26 17:41:03.545492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:26.173 [2024-11-26 17:41:03.545511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:26.173 [2024-11-26 17:41:03.545522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.173 [2024-11-26 17:41:03.545575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:26.173 [2024-11-26 17:41:03.550396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.173 [2024-11-26 17:41:03.550434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:26.173 [2024-11-26 17:41:03.550454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:44:26.173 [2024-11-26 17:41:03.550463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.173 [2024-11-26 17:41:03.551019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.173 [2024-11-26 17:41:03.551036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:26.173 [2024-11-26 17:41:03.551047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:44:26.173 [2024-11-26 17:41:03.551055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.173 [2024-11-26 17:41:03.553960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.173 [2024-11-26 17:41:03.553983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:26.173 [2024-11-26 17:41:03.553997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:44:26.173 [2024-11-26 17:41:03.554006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.173 [2024-11-26 17:41:03.559874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.173 [2024-11-26 17:41:03.559908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:26.173 [2024-11-26 17:41:03.559921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms 00:44:26.173 [2024-11-26 17:41:03.559929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.173 [2024-11-26 17:41:03.601424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.173 [2024-11-26 17:41:03.601486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:26.173 [2024-11-26 17:41:03.601507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.468 ms 00:44:26.173 [2024-11-26 17:41:03.601516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.625075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.434 [2024-11-26 17:41:03.625134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:26.434 [2024-11-26 17:41:03.625156] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.477 ms 00:44:26.434 [2024-11-26 17:41:03.625164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.625457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.434 [2024-11-26 17:41:03.625470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:26.434 [2024-11-26 17:41:03.625494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:44:26.434 [2024-11-26 17:41:03.625503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.663245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.434 [2024-11-26 17:41:03.663396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:26.434 [2024-11-26 17:41:03.663417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.778 ms 00:44:26.434 [2024-11-26 17:41:03.663426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.699596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.434 [2024-11-26 17:41:03.699645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:26.434 [2024-11-26 17:41:03.699663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.143 ms 00:44:26.434 [2024-11-26 17:41:03.699687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.737511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.434 [2024-11-26 17:41:03.737558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:26.434 [2024-11-26 17:41:03.737574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.809 ms 00:44:26.434 [2024-11-26 17:41:03.737583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.777267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.434 [2024-11-26 17:41:03.777315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:26.434 [2024-11-26 17:41:03.777331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.540 ms 00:44:26.434 [2024-11-26 17:41:03.777340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.434 [2024-11-26 17:41:03.777453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:26.434 [2024-11-26 17:41:03.777472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 
[2024-11-26 17:41:03.777547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:26.434 [2024-11-26 17:41:03.777834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:44:26.435 [2024-11-26 17:41:03.777854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.777977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:26.435 [2024-11-26 17:41:03.778600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:26.436 [2024-11-26 17:41:03.778620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:26.436 [2024-11-26 17:41:03.778650] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e 00:44:26.436 [2024-11-26 17:41:03.778660] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:26.436 [2024-11-26 17:41:03.778671] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:26.436 [2024-11-26 17:41:03.778684] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:26.436 [2024-11-26 17:41:03.778697] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:26.436 [2024-11-26 17:41:03.778706] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:26.436 [2024-11-26 17:41:03.778718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:44:26.436 [2024-11-26 17:41:03.778727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:26.436 [2024-11-26 17:41:03.778738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:26.436 [2024-11-26 17:41:03.778745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:26.436 [2024-11-26 17:41:03.778757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.436 [2024-11-26 17:41:03.778767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:26.436 [2024-11-26 17:41:03.778780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:44:26.436 [2024-11-26 17:41:03.778789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.436 [2024-11-26 17:41:03.801788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.436 [2024-11-26 17:41:03.801883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:26.436 [2024-11-26 17:41:03.801905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.999 ms 00:44:26.436 [2024-11-26 17:41:03.801914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.436 [2024-11-26 17:41:03.802657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:26.436 [2024-11-26 17:41:03.802670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:26.436 [2024-11-26 17:41:03.802682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:44:26.436 [2024-11-26 17:41:03.802690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.695 [2024-11-26 17:41:03.879226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.696 [2024-11-26 17:41:03.879284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:26.696 [2024-11-26 17:41:03.879301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.696 [2024-11-26 17:41:03.879310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.696 [2024-11-26 17:41:03.879489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.696 [2024-11-26 17:41:03.879501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:26.696 [2024-11-26 17:41:03.879513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.696 [2024-11-26 17:41:03.879521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.696 [2024-11-26 17:41:03.879603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.696 [2024-11-26 17:41:03.879634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:26.696 [2024-11-26 17:41:03.879649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.696 [2024-11-26 17:41:03.879657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.696 [2024-11-26 17:41:03.879697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.696 [2024-11-26 17:41:03.879707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:26.696 [2024-11-26 17:41:03.879718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.696 [2024-11-26 17:41:03.879746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.696 [2024-11-26 
17:41:04.030459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.696 [2024-11-26 17:41:04.030654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:26.696 [2024-11-26 17:41:04.030680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.696 [2024-11-26 17:41:04.030690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.147181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.147281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:26.956 [2024-11-26 17:41:04.147304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.147317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.147519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.147535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:26.956 [2024-11-26 17:41:04.147560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.147572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.147664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.147690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:26.956 [2024-11-26 17:41:04.147707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.147719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.147941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.147963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:26.956 [2024-11-26 17:41:04.147977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.147990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.148072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.148086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:26.956 [2024-11-26 17:41:04.148099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.148108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.148180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.148191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:26.956 [2024-11-26 17:41:04.148207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.148219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.148300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:26.956 [2024-11-26 17:41:04.148311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:26.956 [2024-11-26 17:41:04.148324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:26.956 [2024-11-26 17:41:04.148333] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:26.956 [2024-11-26 17:41:04.148570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 604.311 ms, result 0 00:44:26.956 true 00:44:26.956 17:41:04 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78944 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78944 ']' 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78944 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78944 00:44:26.956 killing process with pid 78944 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78944' 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78944 00:44:26.956 17:41:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78944 00:44:36.941 17:41:14 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:44:37.881 65536+0 records in 00:44:37.881 65536+0 records out 00:44:37.881 268435456 bytes (268 MB, 256 MiB) copied, 0.942903 s, 285 MB/s 00:44:37.881 17:41:15 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:38.138 [2024-11-26 17:41:15.343623] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
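The dd transfer above is internally consistent: 65536 records × 4 KiB = 268,435,456 bytes, i.e. exactly 256 MiB (268 MB in decimal units), and 268,435,456 B ÷ 0.942903 s ≈ 284.7 MB/s, which dd rounds to the reported 285 MB/s. That 256 MiB random pattern is the input spdk_dd then writes to ftl0 in the "Copying: N/256 [MB]" progress records further below.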
00:44:38.138 [2024-11-26 17:41:15.343814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79180 ] 00:44:38.138 [2024-11-26 17:41:15.535065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:38.395 [2024-11-26 17:41:15.701219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:38.962 [2024-11-26 17:41:16.163310] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:38.962 [2024-11-26 17:41:16.163418] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:38.962 [2024-11-26 17:41:16.330892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.962 [2024-11-26 17:41:16.330985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:38.962 [2024-11-26 17:41:16.331004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:38.962 [2024-11-26 17:41:16.331014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.962 [2024-11-26 17:41:16.334847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.962 [2024-11-26 17:41:16.334890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:38.962 [2024-11-26 17:41:16.334903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.815 ms 00:44:38.962 [2024-11-26 17:41:16.334913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.962 [2024-11-26 17:41:16.335052] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:38.962 [2024-11-26 17:41:16.336340] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:38.962 [2024-11-26 17:41:16.336375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.962 [2024-11-26 17:41:16.336387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:38.962 [2024-11-26 17:41:16.336398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:44:38.962 [2024-11-26 17:41:16.336407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.962 [2024-11-26 17:41:16.339081] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:38.962 [2024-11-26 17:41:16.364871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.962 [2024-11-26 17:41:16.364920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:38.962 [2024-11-26 17:41:16.364936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.841 ms 00:44:38.962 [2024-11-26 17:41:16.364947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.962 [2024-11-26 17:41:16.365068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.962 [2024-11-26 17:41:16.365083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:38.962 [2024-11-26 17:41:16.365095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:44:38.962 [2024-11-26 17:41:16.365104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.962 [2024-11-26 17:41:16.378436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:38.962 [2024-11-26 17:41:16.378569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:38.962 [2024-11-26 17:41:16.378589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.299 ms 00:44:38.962 [2024-11-26 17:41:16.378600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.962 [2024-11-26 17:41:16.378789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.962 [2024-11-26 17:41:16.378808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:38.962 [2024-11-26 17:41:16.378820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:44:38.962 [2024-11-26 17:41:16.378830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.963 [2024-11-26 17:41:16.378874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.963 [2024-11-26 17:41:16.378886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:38.963 [2024-11-26 17:41:16.378896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:44:38.963 [2024-11-26 17:41:16.378906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.963 [2024-11-26 17:41:16.378936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:38.963 [2024-11-26 17:41:16.385660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.963 [2024-11-26 17:41:16.385697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:38.963 [2024-11-26 17:41:16.385711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.747 ms 00:44:38.963 [2024-11-26 17:41:16.385721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.963 [2024-11-26 17:41:16.385782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.963 [2024-11-26 17:41:16.385795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:38.963 [2024-11-26 17:41:16.385806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:38.963 [2024-11-26 17:41:16.385815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.963 [2024-11-26 17:41:16.385844] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:38.963 [2024-11-26 17:41:16.385869] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:38.963 [2024-11-26 17:41:16.385911] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:38.963 [2024-11-26 17:41:16.385932] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:38.963 [2024-11-26 17:41:16.386042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:38.963 [2024-11-26 17:41:16.386055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:38.963 [2024-11-26 17:41:16.386068] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:38.963 [2024-11-26 17:41:16.386084] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386096] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386107] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:38.963 [2024-11-26 17:41:16.386116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:38.963 [2024-11-26 17:41:16.386126] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:38.963 [2024-11-26 17:41:16.386134] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:38.963 [2024-11-26 17:41:16.386145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.963 [2024-11-26 17:41:16.386155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:38.963 [2024-11-26 17:41:16.386165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:44:38.963 [2024-11-26 17:41:16.386174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.963 [2024-11-26 17:41:16.386267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.963 [2024-11-26 17:41:16.386282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:38.963 [2024-11-26 17:41:16.386291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:38.963 [2024-11-26 17:41:16.386300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:38.963 [2024-11-26 17:41:16.386413] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:38.963 [2024-11-26 17:41:16.386426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:38.963 [2024-11-26 17:41:16.386437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:38.963 [2024-11-26 17:41:16.386464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:38.963 [2024-11-26 17:41:16.386491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:38.963 [2024-11-26 17:41:16.386508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:38.963 [2024-11-26 17:41:16.386532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:38.963 [2024-11-26 17:41:16.386540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:38.963 [2024-11-26 17:41:16.386549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:38.963 [2024-11-26 17:41:16.386557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:38.963 [2024-11-26 17:41:16.386565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:38.963 [2024-11-26 17:41:16.386584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386594] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:38.963 [2024-11-26 17:41:16.386630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:38.963 [2024-11-26 17:41:16.386657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:38.963 [2024-11-26 17:41:16.386682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:38.963 [2024-11-26 17:41:16.386706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:38.963 [2024-11-26 17:41:16.386731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:38.963 [2024-11-26 17:41:16.386747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:38.963 [2024-11-26 17:41:16.386755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:38.963 [2024-11-26 17:41:16.386763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:38.963 [2024-11-26 17:41:16.386771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:38.963 [2024-11-26 17:41:16.386779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:38.963 [2024-11-26 17:41:16.386787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:38.963 [2024-11-26 17:41:16.386803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:38.963 [2024-11-26 17:41:16.386813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386821] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:38.963 [2024-11-26 17:41:16.386830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:38.963 [2024-11-26 17:41:16.386842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:38.963 [2024-11-26 17:41:16.386861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:38.963 [2024-11-26 17:41:16.386870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:38.963 [2024-11-26 17:41:16.386878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:38.963 
[2024-11-26 17:41:16.386887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:38.963 [2024-11-26 17:41:16.386895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:38.963 [2024-11-26 17:41:16.386903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:38.963 [2024-11-26 17:41:16.386915] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:38.963 [2024-11-26 17:41:16.386927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:38.963 [2024-11-26 17:41:16.386938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:38.963 [2024-11-26 17:41:16.386948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:38.963 [2024-11-26 17:41:16.386958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:44:38.963 [2024-11-26 17:41:16.386967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:38.963 [2024-11-26 17:41:16.386976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:38.963 [2024-11-26 17:41:16.386985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:38.963 [2024-11-26 17:41:16.386994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:38.963 [2024-11-26 17:41:16.387003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:38.963 [2024-11-26 17:41:16.387012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:38.963 [2024-11-26 17:41:16.387021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:38.963 [2024-11-26 17:41:16.387029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:38.964 [2024-11-26 17:41:16.387038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:38.964 [2024-11-26 17:41:16.387046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:38.964 [2024-11-26 17:41:16.387055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:38.964 [2024-11-26 17:41:16.387063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:38.964 [2024-11-26 17:41:16.387073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:38.964 [2024-11-26 17:41:16.387083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:38.964 [2024-11-26 17:41:16.387092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:38.964 [2024-11-26 17:41:16.387101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:38.964 [2024-11-26 17:41:16.387109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:38.964 [2024-11-26 17:41:16.387119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:38.964 [2024-11-26 17:41:16.387133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:38.964 [2024-11-26 17:41:16.387142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:44:38.964 [2024-11-26 17:41:16.387152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.444728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.444915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:39.223 [2024-11-26 17:41:16.444941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.595 ms 00:44:39.223 [2024-11-26 17:41:16.444955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.445218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.445235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:39.223 [2024-11-26 17:41:16.445246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:44:39.223 [2024-11-26 17:41:16.445255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.514639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.514722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:39.223 [2024-11-26 17:41:16.514738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.486 ms 00:44:39.223 [2024-11-26 17:41:16.514749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.514898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.514912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:39.223 [2024-11-26 17:41:16.514923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:39.223 [2024-11-26 17:41:16.514932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.515741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.515757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:39.223 [2024-11-26 17:41:16.515780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:44:39.223 [2024-11-26 17:41:16.515790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.515949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.515975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:39.223 [2024-11-26 17:41:16.515991] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:44:39.223 [2024-11-26 17:41:16.516003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.541479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.541540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:39.223 [2024-11-26 17:41:16.541555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.483 ms 00:44:39.223 [2024-11-26 17:41:16.541564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.561003] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:44:39.223 [2024-11-26 17:41:16.561047] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:39.223 [2024-11-26 17:41:16.561060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.561069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:39.223 [2024-11-26 17:41:16.561079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.350 ms 00:44:39.223 [2024-11-26 17:41:16.561087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.589994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.590041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:39.223 [2024-11-26 17:41:16.590055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.868 ms 00:44:39.223 [2024-11-26 17:41:16.590063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.608023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.608059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:39.223 [2024-11-26 17:41:16.608070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.905 ms 00:44:39.223 [2024-11-26 17:41:16.608078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.625500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.625579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:39.223 [2024-11-26 17:41:16.625634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.382 ms 00:44:39.223 [2024-11-26 17:41:16.625666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.223 [2024-11-26 17:41:16.626497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.223 [2024-11-26 17:41:16.626566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:39.223 [2024-11-26 17:41:16.626620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:44:39.223 [2024-11-26 17:41:16.626657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.721953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.722151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:39.483 [2024-11-26 17:41:16.722220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.419 ms 00:44:39.483 [2024-11-26 17:41:16.722244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.736652] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:39.483 [2024-11-26 17:41:16.763842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.764017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:39.483 [2024-11-26 17:41:16.764073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.490 ms 00:44:39.483 [2024-11-26 17:41:16.764082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.764286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.764301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:39.483 [2024-11-26 17:41:16.764311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:39.483 [2024-11-26 17:41:16.764319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.764389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.764398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:39.483 [2024-11-26 17:41:16.764407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:44:39.483 [2024-11-26 17:41:16.764415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.764463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.764482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:39.483 [2024-11-26 17:41:16.764490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:44:39.483 [2024-11-26 17:41:16.764498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.764540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:39.483 [2024-11-26 17:41:16.764550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.764558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:39.483 [2024-11-26 17:41:16.764567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:39.483 [2024-11-26 17:41:16.764574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.800578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.800631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:39.483 [2024-11-26 17:41:16.800645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.049 ms 00:44:39.483 [2024-11-26 17:41:16.800653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.483 [2024-11-26 17:41:16.800800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.483 [2024-11-26 17:41:16.800813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:39.483 [2024-11-26 17:41:16.800821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:44:39.483 [2024-11-26 17:41:16.800828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
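Almost every record in this section comes in the same four-line group emitted by trace_step() in mngt/ftl_mngt.c (lines 427-431 in this build): the step type (Action on the forward path, Rollback during teardown), the step name, its duration, and its status. A minimal C sketch of that output shape, with illustrative parameter names only (the real trace_step() reads these fields from SPDK's internal management-step structures rather than taking them as arguments):

    #include <stdio.h>

    /* Illustrative sketch of the four-record group seen throughout this log;
     * not the real SPDK implementation, which operates on internal step
     * descriptors inside mngt/ftl_mngt.c. */
    static void trace_step(const char *dev, const char *type,
                           const char *name, double duration_ms, int status)
    {
            printf("[FTL][%s] %s\n", dev, type);      /* "Action" or "Rollback" */
            printf("[FTL][%s] name: %s\n", dev, name);
            printf("[FTL][%s] duration: %.3f ms\n", dev, duration_ms);
            printf("[FTL][%s] status: %d\n", dev, status);
    }

    int main(void)
    {
            /* Mirrors the 'Finalize initialization' group just above. */
            trace_step("ftl0", "Action", "Finalize initialization", 0.056, 0);
            return 0;
    }

A status of 0 means the step succeeded; the Rollback groups earlier in the log (all reporting duration: 0.000 ms) are the matching teardown steps run during the unload.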
00:44:39.483 [2024-11-26 17:41:16.802194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:39.483 [2024-11-26 17:41:16.807336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 471.823 ms, result 0 00:44:39.483 [2024-11-26 17:41:16.808239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:39.483 [2024-11-26 17:41:16.826186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:40.421  [2024-11-26T17:41:19.245Z] Copying: 33/256 [MB] (33 MBps) [2024-11-26T17:41:20.181Z] Copying: 65/256 [MB] (31 MBps) [2024-11-26T17:41:21.116Z] Copying: 98/256 [MB] (33 MBps) [2024-11-26T17:41:22.056Z] Copying: 131/256 [MB] (33 MBps) [2024-11-26T17:41:22.992Z] Copying: 164/256 [MB] (32 MBps) [2024-11-26T17:41:23.928Z] Copying: 197/256 [MB] (32 MBps) [2024-11-26T17:41:24.864Z] Copying: 230/256 [MB] (33 MBps) [2024-11-26T17:41:24.864Z] Copying: 256/256 [MB] (average 33 MBps)[2024-11-26 17:41:24.586495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:47.418 [2024-11-26 17:41:24.603504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.418 [2024-11-26 17:41:24.603647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:47.419 [2024-11-26 17:41:24.603669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:47.419 [2024-11-26 17:41:24.603703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.603734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:47.419 [2024-11-26 17:41:24.608617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.608648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:47.419 [2024-11-26 17:41:24.608658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.875 ms 00:44:47.419 [2024-11-26 17:41:24.608666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.610863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.610900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:47.419 [2024-11-26 17:41:24.610912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:44:47.419 [2024-11-26 17:41:24.610921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.617844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.617890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:47.419 [2024-11-26 17:41:24.617903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.918 ms 00:44:47.419 [2024-11-26 17:41:24.617912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.623751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.623783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:47.419 [2024-11-26 17:41:24.623793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.811 ms 00:44:47.419 
[2024-11-26 17:41:24.623801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.668107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.668317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:47.419 [2024-11-26 17:41:24.668361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.320 ms 00:44:47.419 [2024-11-26 17:41:24.668371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.694620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.694713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:47.419 [2024-11-26 17:41:24.694735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.169 ms 00:44:47.419 [2024-11-26 17:41:24.694743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.694950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.694963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:47.419 [2024-11-26 17:41:24.694972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:44:47.419 [2024-11-26 17:41:24.694998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.738242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.738441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:47.419 [2024-11-26 17:41:24.738469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.304 ms 00:44:47.419 [2024-11-26 17:41:24.738480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.781248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.781335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:47.419 [2024-11-26 17:41:24.781352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.693 ms 00:44:47.419 [2024-11-26 17:41:24.781366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.419 [2024-11-26 17:41:24.823044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.419 [2024-11-26 17:41:24.823130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:47.419 [2024-11-26 17:41:24.823147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.636 ms 00:44:47.419 [2024-11-26 17:41:24.823157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.681 [2024-11-26 17:41:24.866134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.681 [2024-11-26 17:41:24.866242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:47.681 [2024-11-26 17:41:24.866260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.875 ms 00:44:47.681 [2024-11-26 17:41:24.866269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.681 [2024-11-26 17:41:24.866435] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:47.681 [2024-11-26 17:41:24.866455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:47.681 
[2024-11-26 17:41:24.866467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical per-band entries condensed) 00:44:47.682 [2024-11-26 17:41:24.867478] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:44:47.682 [2024-11-26 17:41:24.867489] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e 00:44:47.682 [2024-11-26 17:41:24.867501] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:47.682 [2024-11-26 17:41:24.867514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:47.682 [2024-11-26 17:41:24.867526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:47.682 [2024-11-26 17:41:24.867542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:47.682 [2024-11-26 17:41:24.867554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:47.682 [2024-11-26 17:41:24.867563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:47.682 [2024-11-26 17:41:24.867573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:47.682 [2024-11-26 17:41:24.867583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:47.682 [2024-11-26 17:41:24.867594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:47.682 [2024-11-26 17:41:24.867604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.682 [2024-11-26 17:41:24.867637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:47.682 [2024-11-26 17:41:24.867648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:44:47.682 [2024-11-26 17:41:24.867666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:24.890513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.682 [2024-11-26 17:41:24.890590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:47.682 [2024-11-26 17:41:24.890624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.847 ms 00:44:47.682 [2024-11-26 17:41:24.890642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:24.891343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.682 [2024-11-26 17:41:24.891362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:47.682 [2024-11-26 17:41:24.891371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:44:47.682 [2024-11-26 17:41:24.891380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:24.950738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.682 [2024-11-26 17:41:24.950818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:47.682 [2024-11-26 17:41:24.950834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.682 [2024-11-26 17:41:24.950843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:24.950979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.682 [2024-11-26 17:41:24.950989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:47.682 [2024-11-26 17:41:24.950998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.682 [2024-11-26 17:41:24.951005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:24.951069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:44:47.682 [2024-11-26 17:41:24.951081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:47.682 [2024-11-26 17:41:24.951089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.682 [2024-11-26 17:41:24.951097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:24.951122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.682 [2024-11-26 17:41:24.951134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:47.682 [2024-11-26 17:41:24.951142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.682 [2024-11-26 17:41:24.951150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.682 [2024-11-26 17:41:25.088846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.682 [2024-11-26 17:41:25.089054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:47.682 [2024-11-26 17:41:25.089080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.682 [2024-11-26 17:41:25.089090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.205540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.205622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:47.942 [2024-11-26 17:41:25.205640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.205649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.205764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.205774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:47.942 [2024-11-26 17:41:25.205783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.205793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.205826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.205836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:47.942 [2024-11-26 17:41:25.205851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.205859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.205985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.206002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:47.942 [2024-11-26 17:41:25.206014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.206023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.206094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.206107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:47.942 [2024-11-26 17:41:25.206117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.206133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 
17:41:25.206192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.206207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:47.942 [2024-11-26 17:41:25.206218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.206226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.206290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:47.942 [2024-11-26 17:41:25.206303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:47.942 [2024-11-26 17:41:25.206317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:47.942 [2024-11-26 17:41:25.206329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.942 [2024-11-26 17:41:25.206522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 604.165 ms, result 0 00:44:49.317 00:44:49.317 00:44:49.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:49.317 17:41:26 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79295 00:44:49.317 17:41:26 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79295 00:44:49.317 17:41:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79295 ']' 00:44:49.317 17:41:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:49.317 17:41:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:49.317 17:41:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:49.317 17:41:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:49.317 17:41:26 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:44:49.317 17:41:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:44:49.317 [2024-11-26 17:41:26.751427] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:44:49.317 [2024-11-26 17:41:26.751560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79295 ] 00:44:49.575 [2024-11-26 17:41:26.929962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:49.833 [2024-11-26 17:41:27.072869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.818 17:41:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.818 17:41:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:44:50.818 17:41:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:44:51.077 [2024-11-26 17:41:28.315791] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:51.077 [2024-11-26 17:41:28.315929] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:51.077 [2024-11-26 17:41:28.497761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.077 [2024-11-26 17:41:28.497818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:51.077 [2024-11-26 17:41:28.497837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:51.077 [2024-11-26 17:41:28.497846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.077 [2024-11-26 17:41:28.501598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.077 [2024-11-26 17:41:28.501648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:51.077 [2024-11-26 17:41:28.501662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.735 ms 00:44:51.077 [2024-11-26 17:41:28.501669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.077 [2024-11-26 17:41:28.501779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:51.077 [2024-11-26 17:41:28.502844] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:51.077 [2024-11-26 17:41:28.502877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.077 [2024-11-26 17:41:28.502886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:51.077 [2024-11-26 17:41:28.502895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:44:51.077 [2024-11-26 17:41:28.502905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.077 [2024-11-26 17:41:28.505451] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:51.337 [2024-11-26 17:41:28.525907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.525953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:51.337 [2024-11-26 17:41:28.525966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.500 ms 00:44:51.337 [2024-11-26 17:41:28.525980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.526088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.526105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:51.337 [2024-11-26 17:41:28.526116] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:44:51.337 [2024-11-26 17:41:28.526130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.539356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.539508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:51.337 [2024-11-26 17:41:28.539543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.187 ms 00:44:51.337 [2024-11-26 17:41:28.539557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.539785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.539806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:51.337 [2024-11-26 17:41:28.539817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:44:51.337 [2024-11-26 17:41:28.539840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.539876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.539890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:51.337 [2024-11-26 17:41:28.539899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:51.337 [2024-11-26 17:41:28.539911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.539940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:51.337 [2024-11-26 17:41:28.545829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.545858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:51.337 [2024-11-26 17:41:28.545874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.905 ms 00:44:51.337 [2024-11-26 17:41:28.545883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.545946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.545956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:51.337 [2024-11-26 17:41:28.545974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:51.337 [2024-11-26 17:41:28.545982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.546011] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:51.337 [2024-11-26 17:41:28.546034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:51.337 [2024-11-26 17:41:28.546085] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:51.337 [2024-11-26 17:41:28.546103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:51.337 [2024-11-26 17:41:28.546195] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:51.337 [2024-11-26 17:41:28.546206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:51.337 [2024-11-26 17:41:28.546224] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:51.337 [2024-11-26 17:41:28.546235] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546246] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546255] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:51.337 [2024-11-26 17:41:28.546266] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:51.337 [2024-11-26 17:41:28.546273] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:51.337 [2024-11-26 17:41:28.546286] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:51.337 [2024-11-26 17:41:28.546295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.546305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:51.337 [2024-11-26 17:41:28.546313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:44:51.337 [2024-11-26 17:41:28.546326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.546402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.337 [2024-11-26 17:41:28.546413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:51.337 [2024-11-26 17:41:28.546420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:44:51.337 [2024-11-26 17:41:28.546429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.337 [2024-11-26 17:41:28.546519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:51.337 [2024-11-26 17:41:28.546531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:51.337 [2024-11-26 17:41:28.546540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:51.337 [2024-11-26 17:41:28.546558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:51.337 [2024-11-26 17:41:28.546569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:51.337 [2024-11-26 17:41:28.546575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:51.337 [2024-11-26 17:41:28.546602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:51.337 [2024-11-26 17:41:28.546630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:51.337 [2024-11-26 17:41:28.546637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:51.337 [2024-11-26 17:41:28.546648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:51.337 [2024-11-26 17:41:28.546655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:51.337 [2024-11-26 17:41:28.546667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:51.337 [2024-11-26 17:41:28.546673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:51.337 [2024-11-26 17:41:28.546684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:51.337 
[2024-11-26 17:41:28.546691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:51.337 [2024-11-26 17:41:28.546702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:51.337 [2024-11-26 17:41:28.546734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:51.337 [2024-11-26 17:41:28.546741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:51.337 [2024-11-26 17:41:28.546752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:51.337 [2024-11-26 17:41:28.546774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:51.337 [2024-11-26 17:41:28.546780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:51.337 [2024-11-26 17:41:28.546792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:51.337 [2024-11-26 17:41:28.546798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:51.338 [2024-11-26 17:41:28.546809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:51.338 [2024-11-26 17:41:28.546816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:51.338 [2024-11-26 17:41:28.546827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:51.338 [2024-11-26 17:41:28.546833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:51.338 [2024-11-26 17:41:28.546844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:51.338 [2024-11-26 17:41:28.546851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:51.338 [2024-11-26 17:41:28.546864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:51.338 [2024-11-26 17:41:28.546871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:51.338 [2024-11-26 17:41:28.546882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:51.338 [2024-11-26 17:41:28.546889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:51.338 [2024-11-26 17:41:28.546906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:51.338 [2024-11-26 17:41:28.546914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:51.338 [2024-11-26 17:41:28.546929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:51.338 [2024-11-26 17:41:28.546935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:51.338 [2024-11-26 17:41:28.546946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:51.338 [2024-11-26 17:41:28.546954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:51.338 [2024-11-26 17:41:28.546964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:51.338 [2024-11-26 17:41:28.546976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:51.338 [2024-11-26 17:41:28.546988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:51.338 [2024-11-26 17:41:28.546996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:51.338 [2024-11-26 17:41:28.547008] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:44:51.338 [2024-11-26 17:41:28.547015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:51.338 [2024-11-26 17:41:28.547026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:51.338 [2024-11-26 17:41:28.547032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:51.338 [2024-11-26 17:41:28.547047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:51.338 [2024-11-26 17:41:28.547057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:51.338 [2024-11-26 17:41:28.547075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:51.338 [2024-11-26 17:41:28.547090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:51.338 [2024-11-26 17:41:28.547122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:51.338 [2024-11-26 17:41:28.547137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:44:51.338 [2024-11-26 17:41:28.547146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:51.338 [2024-11-26 17:41:28.547156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:51.338 [2024-11-26 17:41:28.547163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:51.338 [2024-11-26 17:41:28.547178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:51.338 [2024-11-26 17:41:28.547185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:51.338 [2024-11-26 17:41:28.547195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:51.338 [2024-11-26 17:41:28.547203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:51.338 [2024-11-26 17:41:28.547252] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:51.338 [2024-11-26 
17:41:28.547262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:51.338 [2024-11-26 17:41:28.547287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:51.338 [2024-11-26 17:41:28.547297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:51.338 [2024-11-26 17:41:28.547306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:51.338 [2024-11-26 17:41:28.547321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.547330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:51.338 [2024-11-26 17:41:28.547340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:44:51.338 [2024-11-26 17:41:28.547352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.597921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.597984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:51.338 [2024-11-26 17:41:28.598002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.583 ms 00:44:51.338 [2024-11-26 17:41:28.598014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.598217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.598228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:51.338 [2024-11-26 17:41:28.598240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:44:51.338 [2024-11-26 17:41:28.598247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.653240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.653300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:51.338 [2024-11-26 17:41:28.653334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.063 ms 00:44:51.338 [2024-11-26 17:41:28.653342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.653476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.653487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:51.338 [2024-11-26 17:41:28.653501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:51.338 [2024-11-26 17:41:28.653509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.654383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.654409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:51.338 [2024-11-26 17:41:28.654423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:44:51.338 [2024-11-26 17:41:28.654433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.654587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.654600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:51.338 [2024-11-26 17:41:28.654621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:44:51.338 [2024-11-26 17:41:28.654630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.682290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.682350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:51.338 [2024-11-26 17:41:28.682368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.675 ms 00:44:51.338 [2024-11-26 17:41:28.682377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.721311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:44:51.338 [2024-11-26 17:41:28.721374] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:51.338 [2024-11-26 17:41:28.721418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.721428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:51.338 [2024-11-26 17:41:28.721444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.938 ms 00:44:51.338 [2024-11-26 17:41:28.721466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.752571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.752657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:51.338 [2024-11-26 17:41:28.752678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.015 ms 00:44:51.338 [2024-11-26 17:41:28.752688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.338 [2024-11-26 17:41:28.772647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.338 [2024-11-26 17:41:28.772748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:51.338 [2024-11-26 17:41:28.772796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.827 ms 00:44:51.338 [2024-11-26 17:41:28.772805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.790386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.790420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:51.597 [2024-11-26 17:41:28.790433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.515 ms 00:44:51.597 [2024-11-26 17:41:28.790441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.791240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.791273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:51.597 [2024-11-26 17:41:28.791286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:44:51.597 [2024-11-26 17:41:28.791295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 
17:41:28.890057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.890139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:51.597 [2024-11-26 17:41:28.890161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.913 ms 00:44:51.597 [2024-11-26 17:41:28.890171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.901282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:51.597 [2024-11-26 17:41:28.929993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.930115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:51.597 [2024-11-26 17:41:28.930130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.762 ms 00:44:51.597 [2024-11-26 17:41:28.930143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.930320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.930337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:51.597 [2024-11-26 17:41:28.930346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:51.597 [2024-11-26 17:41:28.930359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.930428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.930442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:51.597 [2024-11-26 17:41:28.930451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:44:51.597 [2024-11-26 17:41:28.930469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.930494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.930508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:51.597 [2024-11-26 17:41:28.930517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:51.597 [2024-11-26 17:41:28.930530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.930575] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:51.597 [2024-11-26 17:41:28.930594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.930635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:51.597 [2024-11-26 17:41:28.930649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:51.597 [2024-11-26 17:41:28.930662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.968749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.968796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:51.597 [2024-11-26 17:41:28.968812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.130 ms 00:44:51.597 [2024-11-26 17:41:28.968836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.968955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.597 [2024-11-26 17:41:28.968966] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:51.597 [2024-11-26 17:41:28.968982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:44:51.597 [2024-11-26 17:41:28.968990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.597 [2024-11-26 17:41:28.970454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:51.597 [2024-11-26 17:41:28.975306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 473.192 ms, result 0 00:44:51.597 [2024-11-26 17:41:28.976450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:51.597 Some configs were skipped because the RPC state that can call them passed over. 00:44:51.597 17:41:29 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:44:51.855 [2024-11-26 17:41:29.244491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.855 [2024-11-26 17:41:29.244694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:44:51.855 [2024-11-26 17:41:29.244765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:44:51.855 [2024-11-26 17:41:29.244827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.855 [2024-11-26 17:41:29.244938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.118 ms, result 0 00:44:51.855 true 00:44:51.855 17:41:29 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:44:52.113 [2024-11-26 17:41:29.459786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.113 [2024-11-26 17:41:29.459964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:44:52.113 [2024-11-26 17:41:29.460003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:44:52.113 [2024-11-26 17:41:29.460018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.113 [2024-11-26 17:41:29.460102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.465 ms, result 0 00:44:52.113 true 00:44:52.113 17:41:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79295 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79295 ']' 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79295 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79295 00:44:52.113 killing process with pid 79295 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79295' 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79295 00:44:52.113 17:41:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79295 00:44:53.490 [2024-11-26 17:41:30.773145] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.773227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:53.490 [2024-11-26 17:41:30.773242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:53.490 [2024-11-26 17:41:30.773251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.773277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:53.490 [2024-11-26 17:41:30.777915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.777955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:53.490 [2024-11-26 17:41:30.777969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.624 ms 00:44:53.490 [2024-11-26 17:41:30.777978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.778270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.778281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:53.490 [2024-11-26 17:41:30.778290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:44:53.490 [2024-11-26 17:41:30.778298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.781572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.781639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:53.490 [2024-11-26 17:41:30.781652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:44:53.490 [2024-11-26 17:41:30.781660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.787262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.787309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:53.490 [2024-11-26 17:41:30.787323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.571 ms 00:44:53.490 [2024-11-26 17:41:30.787331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.803180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.803268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:53.490 [2024-11-26 17:41:30.803309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.816 ms 00:44:53.490 [2024-11-26 17:41:30.803317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.814298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.814334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:53.490 [2024-11-26 17:41:30.814346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.932 ms 00:44:53.490 [2024-11-26 17:41:30.814354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.814498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.814509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:53.490 [2024-11-26 17:41:30.814520] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:44:53.490 [2024-11-26 17:41:30.814527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.830139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.830167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:53.490 [2024-11-26 17:41:30.830185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.622 ms 00:44:53.490 [2024-11-26 17:41:30.830192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.844826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.844854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:53.490 [2024-11-26 17:41:30.844875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.609 ms 00:44:53.490 [2024-11-26 17:41:30.844882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.858745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.858772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:53.490 [2024-11-26 17:41:30.858787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.833 ms 00:44:53.490 [2024-11-26 17:41:30.858794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.873176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.490 [2024-11-26 17:41:30.873248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:53.490 [2024-11-26 17:41:30.873272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.342 ms 00:44:53.490 [2024-11-26 17:41:30.873280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.490 [2024-11-26 17:41:30.873332] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:53.490 [2024-11-26 17:41:30.873347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 
17:41:30.873466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:53.490 [2024-11-26 17:41:30.873526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:44:53.491 [2024-11-26 17:41:30.873759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.873989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:53.491 [2024-11-26 17:41:30.874321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:53.492 [2024-11-26 17:41:30.874590] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:53.492 [2024-11-26 17:41:30.874622] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e 00:44:53.492 [2024-11-26 17:41:30.874637] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:53.492 [2024-11-26 17:41:30.874650] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:53.492 [2024-11-26 17:41:30.874660] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:53.492 [2024-11-26 17:41:30.874678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:53.492 [2024-11-26 17:41:30.874686] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:53.492 [2024-11-26 17:41:30.874702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:53.492 [2024-11-26 17:41:30.874714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:53.492 [2024-11-26 17:41:30.874729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:53.492 [2024-11-26 17:41:30.874738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:53.492 [2024-11-26 17:41:30.874754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
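A note on the dump above: every band reports "0 / 261120 wr_cnt: 0 state: free", i.e. no valid blocks out of 261120 and a zero write count, matching the "user writes: 0" in the statistics block. There, "total writes: 960" counts media writes issued by the FTL itself, while "user writes: 0" records that no user data was written in this pass; assuming the "WAF" line is the usual write-amplification ratio of total media writes to user writes, it comes out as inf because the denominator is zero. A minimal sketch of that arithmetic, using the two values from the dump:

  awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'   # prints "inf", matching the WAF line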
00:44:53.492 [2024-11-26 17:41:30.874763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:53.492 [2024-11-26 17:41:30.874778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:44:53.492 [2024-11-26 17:41:30.874797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.492 [2024-11-26 17:41:30.895714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.492 [2024-11-26 17:41:30.895787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:53.492 [2024-11-26 17:41:30.895816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.923 ms 00:44:53.492 [2024-11-26 17:41:30.895826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.492 [2024-11-26 17:41:30.896480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.492 [2024-11-26 17:41:30.896495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:53.492 [2024-11-26 17:41:30.896519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:44:53.492 [2024-11-26 17:41:30.896531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.750 [2024-11-26 17:41:30.971399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.751 [2024-11-26 17:41:30.971481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:53.751 [2024-11-26 17:41:30.971502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.751 [2024-11-26 17:41:30.971511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.751 [2024-11-26 17:41:30.971755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.751 [2024-11-26 17:41:30.971769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:53.751 [2024-11-26 17:41:30.971790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.751 [2024-11-26 17:41:30.971799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.751 [2024-11-26 17:41:30.971872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.751 [2024-11-26 17:41:30.971885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:53.751 [2024-11-26 17:41:30.971905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.751 [2024-11-26 17:41:30.971914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.751 [2024-11-26 17:41:30.971940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.751 [2024-11-26 17:41:30.971949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:53.751 [2024-11-26 17:41:30.971963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.751 [2024-11-26 17:41:30.971988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.751 [2024-11-26 17:41:31.110941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.751 [2024-11-26 17:41:31.111022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:53.751 [2024-11-26 17:41:31.111045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.751 [2024-11-26 17:41:31.111054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 
17:41:31.219851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.219938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:54.009 [2024-11-26 17:41:31.219965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.219974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.220134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:54.009 [2024-11-26 17:41:31.220154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.220162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.220208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:54.009 [2024-11-26 17:41:31.220220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.220228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.220380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:54.009 [2024-11-26 17:41:31.220393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.220401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.220459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:54.009 [2024-11-26 17:41:31.220472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.220480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.220547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:54.009 [2024-11-26 17:41:31.220563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.220571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.009 [2024-11-26 17:41:31.220658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:54.009 [2024-11-26 17:41:31.220684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.009 [2024-11-26 17:41:31.220692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.009 [2024-11-26 17:41:31.220874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.560 ms, result 0 00:44:54.944 17:41:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:44:54.944 17:41:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:55.201 [2024-11-26 17:41:32.456626] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:44:55.201 [2024-11-26 17:41:32.456763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79364 ] 00:44:55.201 [2024-11-26 17:41:32.635380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.461 [2024-11-26 17:41:32.774476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.034 [2024-11-26 17:41:33.199583] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:56.034 [2024-11-26 17:41:33.199779] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:56.034 [2024-11-26 17:41:33.361837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.361909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:56.034 [2024-11-26 17:41:33.361925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:56.034 [2024-11-26 17:41:33.361933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.365124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.365207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:56.034 [2024-11-26 17:41:33.365228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:44:56.034 [2024-11-26 17:41:33.365237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.365335] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:56.034 [2024-11-26 17:41:33.366330] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:56.034 [2024-11-26 17:41:33.366367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.366377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:56.034 [2024-11-26 17:41:33.366385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:44:56.034 [2024-11-26 17:41:33.366393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.368928] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:56.034 [2024-11-26 17:41:33.388581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.388650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:56.034 [2024-11-26 17:41:33.388664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.691 ms 00:44:56.034 [2024-11-26 17:41:33.388672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.388775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.388787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:56.034 [2024-11-26 17:41:33.388796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:44:56.034 [2024-11-26 17:41:33.388804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.401549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.401584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:56.034 [2024-11-26 17:41:33.401596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.724 ms 00:44:56.034 [2024-11-26 17:41:33.401604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.401745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.401760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:56.034 [2024-11-26 17:41:33.401769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:56.034 [2024-11-26 17:41:33.401778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.401829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.401838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:56.034 [2024-11-26 17:41:33.401846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:56.034 [2024-11-26 17:41:33.401854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.401881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:56.034 [2024-11-26 17:41:33.407636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.407665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:56.034 [2024-11-26 17:41:33.407674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.776 ms 00:44:56.034 [2024-11-26 17:41:33.407698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.407750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.407760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:56.034 [2024-11-26 17:41:33.407769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:56.034 [2024-11-26 17:41:33.407776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.407801] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:56.034 [2024-11-26 17:41:33.407824] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:56.034 [2024-11-26 17:41:33.407861] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:56.034 [2024-11-26 17:41:33.407878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:56.034 [2024-11-26 17:41:33.407979] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:56.034 [2024-11-26 17:41:33.407990] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:56.034 [2024-11-26 17:41:33.408000] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:56.034 [2024-11-26 17:41:33.408013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:56.034 [2024-11-26 17:41:33.408022] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:56.034 [2024-11-26 17:41:33.408030] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:56.034 [2024-11-26 17:41:33.408037] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:56.034 [2024-11-26 17:41:33.408045] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:56.034 [2024-11-26 17:41:33.408052] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:56.034 [2024-11-26 17:41:33.408060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.408068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:56.034 [2024-11-26 17:41:33.408075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:44:56.034 [2024-11-26 17:41:33.408082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.408155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.034 [2024-11-26 17:41:33.408166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:56.034 [2024-11-26 17:41:33.408174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:44:56.034 [2024-11-26 17:41:33.408181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.034 [2024-11-26 17:41:33.408288] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:56.034 [2024-11-26 17:41:33.408299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:56.034 [2024-11-26 17:41:33.408307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:56.034 [2024-11-26 17:41:33.408315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.034 [2024-11-26 17:41:33.408322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:56.034 [2024-11-26 17:41:33.408329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:56.034 [2024-11-26 17:41:33.408336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:56.034 [2024-11-26 17:41:33.408343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:56.034 [2024-11-26 17:41:33.408350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:56.034 [2024-11-26 17:41:33.408358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:56.034 [2024-11-26 17:41:33.408366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:56.034 [2024-11-26 17:41:33.408387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:56.034 [2024-11-26 17:41:33.408393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:56.034 [2024-11-26 17:41:33.408401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:56.035 [2024-11-26 17:41:33.408408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:56.035 [2024-11-26 17:41:33.408415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408422] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:56.035 [2024-11-26 17:41:33.408429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:56.035 [2024-11-26 17:41:33.408449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:56.035 [2024-11-26 17:41:33.408469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:56.035 [2024-11-26 17:41:33.408488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:56.035 [2024-11-26 17:41:33.408507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:56.035 [2024-11-26 17:41:33.408526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:56.035 [2024-11-26 17:41:33.408539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:56.035 [2024-11-26 17:41:33.408545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:56.035 [2024-11-26 17:41:33.408551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:56.035 [2024-11-26 17:41:33.408557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:56.035 [2024-11-26 17:41:33.408563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:56.035 [2024-11-26 17:41:33.408570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:56.035 [2024-11-26 17:41:33.408583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:56.035 [2024-11-26 17:41:33.408592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408598] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:56.035 [2024-11-26 17:41:33.408606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:56.035 [2024-11-26 17:41:33.408617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.035 [2024-11-26 17:41:33.408632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:56.035 
[2024-11-26 17:41:33.408638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:56.035 [2024-11-26 17:41:33.408661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:56.035 [2024-11-26 17:41:33.408670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:56.035 [2024-11-26 17:41:33.408676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:56.035 [2024-11-26 17:41:33.408682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:56.035 [2024-11-26 17:41:33.408691] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:56.035 [2024-11-26 17:41:33.408701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:56.035 [2024-11-26 17:41:33.408717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:56.035 [2024-11-26 17:41:33.408725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:44:56.035 [2024-11-26 17:41:33.408732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:56.035 [2024-11-26 17:41:33.408740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:56.035 [2024-11-26 17:41:33.408747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:56.035 [2024-11-26 17:41:33.408754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:56.035 [2024-11-26 17:41:33.408760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:56.035 [2024-11-26 17:41:33.408767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:56.035 [2024-11-26 17:41:33.408774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:56.035 [2024-11-26 17:41:33.408809] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:56.035 [2024-11-26 17:41:33.408817] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:56.035 [2024-11-26 17:41:33.408833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:56.035 [2024-11-26 17:41:33.408841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:56.035 [2024-11-26 17:41:33.408848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:56.035 [2024-11-26 17:41:33.408857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.035 [2024-11-26 17:41:33.408880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:56.035 [2024-11-26 17:41:33.408887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:44:56.035 [2024-11-26 17:41:33.408895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.035 [2024-11-26 17:41:33.458743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.035 [2024-11-26 17:41:33.458890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:56.035 [2024-11-26 17:41:33.458912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.874 ms 00:44:56.035 [2024-11-26 17:41:33.458922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.035 [2024-11-26 17:41:33.459129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.035 [2024-11-26 17:41:33.459140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:56.035 [2024-11-26 17:41:33.459148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:44:56.035 [2024-11-26 17:41:33.459156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.540367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.540421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:56.304 [2024-11-26 17:41:33.540435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.342 ms 00:44:56.304 [2024-11-26 17:41:33.540444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.540570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.540581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:56.304 [2024-11-26 17:41:33.540590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:56.304 [2024-11-26 17:41:33.540598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.541429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.541451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:56.304 [2024-11-26 17:41:33.541469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:44:56.304 [2024-11-26 17:41:33.541477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 
17:41:33.541616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.541641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:56.304 [2024-11-26 17:41:33.541651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:44:56.304 [2024-11-26 17:41:33.541659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.564765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.564805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:56.304 [2024-11-26 17:41:33.564818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.124 ms 00:44:56.304 [2024-11-26 17:41:33.564827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.585549] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:44:56.304 [2024-11-26 17:41:33.585586] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:56.304 [2024-11-26 17:41:33.585601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.585621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:56.304 [2024-11-26 17:41:33.585631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.662 ms 00:44:56.304 [2024-11-26 17:41:33.585639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.614771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.614810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:56.304 [2024-11-26 17:41:33.614823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.104 ms 00:44:56.304 [2024-11-26 17:41:33.614831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.632427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.632461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:56.304 [2024-11-26 17:41:33.632471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.551 ms 00:44:56.304 [2024-11-26 17:41:33.632479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.649752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.649786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:56.304 [2024-11-26 17:41:33.649797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.237 ms 00:44:56.304 [2024-11-26 17:41:33.649804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.650546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.650572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:56.304 [2024-11-26 17:41:33.650586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:44:56.304 [2024-11-26 17:41:33.650597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.304 [2024-11-26 17:41:33.748453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:44:56.304 [2024-11-26 17:41:33.748539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:56.304 [2024-11-26 17:41:33.748556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.996 ms 00:44:56.304 [2024-11-26 17:41:33.748565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.563 [2024-11-26 17:41:33.760626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:56.563 [2024-11-26 17:41:33.788283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.788364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:56.564 [2024-11-26 17:41:33.788382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.573 ms 00:44:56.564 [2024-11-26 17:41:33.788400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.788586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.788599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:56.564 [2024-11-26 17:41:33.788629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:56.564 [2024-11-26 17:41:33.788655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.788745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.788758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:56.564 [2024-11-26 17:41:33.788767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:44:56.564 [2024-11-26 17:41:33.788783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.788832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.788848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:56.564 [2024-11-26 17:41:33.788857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:44:56.564 [2024-11-26 17:41:33.788866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.788914] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:56.564 [2024-11-26 17:41:33.788926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.788934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:56.564 [2024-11-26 17:41:33.788943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:56.564 [2024-11-26 17:41:33.788951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.828162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.828329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:56.564 [2024-11-26 17:41:33.828383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.261 ms 00:44:56.564 [2024-11-26 17:41:33.828394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.828542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.564 [2024-11-26 17:41:33.828554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:44:56.564 [2024-11-26 17:41:33.828565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:44:56.564 [2024-11-26 17:41:33.828573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.564 [2024-11-26 17:41:33.829988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:56.564 [2024-11-26 17:41:33.835823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 468.674 ms, result 0 00:44:56.564 [2024-11-26 17:41:33.836801] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:56.564 [2024-11-26 17:41:33.856014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:57.502  [2024-11-26T17:41:35.887Z] Copying: 30/256 [MB] (30 MBps) [2024-11-26T17:41:37.264Z] Copying: 57/256 [MB] (27 MBps) [2024-11-26T17:41:38.202Z] Copying: 85/256 [MB] (27 MBps) [2024-11-26T17:41:39.149Z] Copying: 112/256 [MB] (27 MBps) [2024-11-26T17:41:40.084Z] Copying: 139/256 [MB] (26 MBps) [2024-11-26T17:41:41.019Z] Copying: 167/256 [MB] (27 MBps) [2024-11-26T17:41:41.955Z] Copying: 195/256 [MB] (28 MBps) [2024-11-26T17:41:42.890Z] Copying: 224/256 [MB] (28 MBps) [2024-11-26T17:41:43.150Z] Copying: 252/256 [MB] (28 MBps) [2024-11-26T17:41:43.150Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-26 17:41:42.946670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:05.704 [2024-11-26 17:41:42.963505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.704 [2024-11-26 17:41:42.963648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:05.704 [2024-11-26 17:41:42.963728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:05.705 [2024-11-26 17:41:42.963764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:42.963837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:05.705 [2024-11-26 17:41:42.968853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:42.968926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:05.705 [2024-11-26 17:41:42.968966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.953 ms 00:45:05.705 [2024-11-26 17:41:42.969004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:42.969280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:42.969327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:05.705 [2024-11-26 17:41:42.969378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:45:05.705 [2024-11-26 17:41:42.969415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:42.972572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:42.972634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:05.705 [2024-11-26 17:41:42.972669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:45:05.705 [2024-11-26 17:41:42.972704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
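The "Copying" progress interleaved above was produced by the spdk_dd readback started at trim.sh@85; once the copy completed, the FTL shutdown whose Deinit/Persist steps follow kicked in. The invocation, as recorded in the log, reads the first 65536 blocks of ftl0 into the test data file; assuming the FTL bdev's 4 KiB block size, 65536 x 4 KiB is exactly the 256 MiB the progress counter reaches:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Here --ib names the bdev to read from, --of the regular file to write, --count the number of input blocks to copy, and --json the SPDK config spdk_dd loads to recreate the ftl0 bdev in its own process.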
00:45:05.705 [2024-11-26 17:41:42.978590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:42.978662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:05.705 [2024-11-26 17:41:42.978701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.819 ms 00:45:05.705 [2024-11-26 17:41:42.978733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:43.020113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:43.020250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:05.705 [2024-11-26 17:41:43.020312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.361 ms 00:45:05.705 [2024-11-26 17:41:43.020346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:43.042698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:43.042816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:05.705 [2024-11-26 17:41:43.042867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.297 ms 00:45:05.705 [2024-11-26 17:41:43.042891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:43.043076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:43.043122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:05.705 [2024-11-26 17:41:43.043186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:45:05.705 [2024-11-26 17:41:43.043222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:43.082595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:43.082783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:05.705 [2024-11-26 17:41:43.082829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.394 ms 00:45:05.705 [2024-11-26 17:41:43.082881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.705 [2024-11-26 17:41:43.124778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.705 [2024-11-26 17:41:43.124949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:05.705 [2024-11-26 17:41:43.124995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.830 ms 00:45:05.705 [2024-11-26 17:41:43.125029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.965 [2024-11-26 17:41:43.163451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.965 [2024-11-26 17:41:43.163663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:05.965 [2024-11-26 17:41:43.163709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.382 ms 00:45:05.965 [2024-11-26 17:41:43.163744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.965 [2024-11-26 17:41:43.199836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.965 [2024-11-26 17:41:43.199968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:05.965 [2024-11-26 17:41:43.200028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.016 ms 00:45:05.965 [2024-11-26 
17:41:43.200051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.965 [2024-11-26 17:41:43.200143] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:05.965 [2024-11-26 17:41:43.200191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:05.965 [2024-11-26 17:41:43.200256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:05.965 [2024-11-26 17:41:43.200308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200861] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.200996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 
17:41:43.201070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:45:05.966 [2024-11-26 17:41:43.201296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:05.966 [2024-11-26 17:41:43.201457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:05.967 [2024-11-26 17:41:43.201578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:05.967 [2024-11-26 17:41:43.201587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e 00:45:05.967 [2024-11-26 17:41:43.201597] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:05.967 [2024-11-26 17:41:43.201605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:05.967 [2024-11-26 17:41:43.201625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:05.967 [2024-11-26 17:41:43.201634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:05.967 [2024-11-26 17:41:43.201642] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:05.967 [2024-11-26 17:41:43.201653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:05.967 [2024-11-26 17:41:43.201668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:05.967 [2024-11-26 17:41:43.201675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:05.967 [2024-11-26 17:41:43.201683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:05.967 [2024-11-26 17:41:43.201692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.967 [2024-11-26 17:41:43.201702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:05.967 [2024-11-26 17:41:43.201712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:45:05.967 [2024-11-26 17:41:43.201720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.967 [2024-11-26 17:41:43.222946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.967 [2024-11-26 17:41:43.223029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:05.967 [2024-11-26 17:41:43.223061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.239 ms 00:45:05.967 [2024-11-26 17:41:43.223071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.967 [2024-11-26 17:41:43.223756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:05.967 [2024-11-26 17:41:43.223772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:05.967 [2024-11-26 17:41:43.223785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:45:05.967 [2024-11-26 17:41:43.223796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.967 [2024-11-26 17:41:43.284282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:05.967 [2024-11-26 17:41:43.284357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:05.967 [2024-11-26 17:41:43.284371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:05.967 [2024-11-26 17:41:43.284385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.967 [2024-11-26 17:41:43.284514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:05.967 [2024-11-26 17:41:43.284525] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:05.967 [2024-11-26 17:41:43.284533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:05.967 [2024-11-26 17:41:43.284541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.967 [2024-11-26 17:41:43.284600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:05.967 [2024-11-26 17:41:43.284629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:05.967 [2024-11-26 17:41:43.284637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:05.967 [2024-11-26 17:41:43.284661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:05.967 [2024-11-26 17:41:43.284686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:05.967 [2024-11-26 17:41:43.284696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:05.967 [2024-11-26 17:41:43.284704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:05.967 [2024-11-26 17:41:43.284712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.424898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.424978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:06.227 [2024-11-26 17:41:43.424993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.425019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.535471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.535699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:06.227 [2024-11-26 17:41:43.535721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.535732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.535867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.535881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:06.227 [2024-11-26 17:41:43.535891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.535900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.535934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.535950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:06.227 [2024-11-26 17:41:43.535959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.535968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.536119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.536132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:06.227 [2024-11-26 17:41:43.536141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.536150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.536190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.536202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:06.227 [2024-11-26 17:41:43.536215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.536224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.536270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.536281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:06.227 [2024-11-26 17:41:43.536289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.536297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.536348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:06.227 [2024-11-26 17:41:43.536362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:06.227 [2024-11-26 17:41:43.536370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:06.227 [2024-11-26 17:41:43.536378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:06.227 [2024-11-26 17:41:43.536542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 574.128 ms, result 0 00:45:07.640 00:45:07.640 00:45:07.640 17:41:44 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:45:07.640 17:41:44 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:45:07.899 17:41:45 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:07.899 [2024-11-26 17:41:45.329026] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:45:07.900 [2024-11-26 17:41:45.329157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79502 ] 00:45:08.159 [2024-11-26 17:41:45.508876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:08.419 [2024-11-26 17:41:45.649132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:08.678 [2024-11-26 17:41:46.066879] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:08.678 [2024-11-26 17:41:46.066959] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:08.938 [2024-11-26 17:41:46.229673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.229733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:08.938 [2024-11-26 17:41:46.229747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:08.938 [2024-11-26 17:41:46.229772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.232875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.232912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:08.938 [2024-11-26 17:41:46.232922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:45:08.938 [2024-11-26 17:41:46.232946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.233047] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:08.938 [2024-11-26 17:41:46.233996] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:08.938 [2024-11-26 17:41:46.234030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.234039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:08.938 [2024-11-26 17:41:46.234048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:45:08.938 [2024-11-26 17:41:46.234056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.236638] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:08.938 [2024-11-26 17:41:46.256302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.256418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:08.938 [2024-11-26 17:41:46.256440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.704 ms 00:45:08.938 [2024-11-26 17:41:46.256454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.256580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.256597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:08.938 [2024-11-26 17:41:46.256621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:45:08.938 [2024-11-26 17:41:46.256632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.270051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:08.938 [2024-11-26 17:41:46.270097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:08.938 [2024-11-26 17:41:46.270110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.392 ms 00:45:08.938 [2024-11-26 17:41:46.270118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.270304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.270321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:08.938 [2024-11-26 17:41:46.270329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:45:08.938 [2024-11-26 17:41:46.270338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.270375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.270384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:08.938 [2024-11-26 17:41:46.270392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:08.938 [2024-11-26 17:41:46.270399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.938 [2024-11-26 17:41:46.270426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:08.938 [2024-11-26 17:41:46.276402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.938 [2024-11-26 17:41:46.276477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:08.938 [2024-11-26 17:41:46.276508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.980 ms 00:45:08.938 [2024-11-26 17:41:46.276530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.939 [2024-11-26 17:41:46.276604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.939 [2024-11-26 17:41:46.276666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:08.939 [2024-11-26 17:41:46.276694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:08.939 [2024-11-26 17:41:46.276726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.939 [2024-11-26 17:41:46.276780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:08.939 [2024-11-26 17:41:46.276826] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:08.939 [2024-11-26 17:41:46.276896] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:08.939 [2024-11-26 17:41:46.276952] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:08.939 [2024-11-26 17:41:46.277078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:08.939 [2024-11-26 17:41:46.277122] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:08.939 [2024-11-26 17:41:46.277165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:08.939 [2024-11-26 17:41:46.277214] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277252] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277293] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:08.939 [2024-11-26 17:41:46.277321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:08.939 [2024-11-26 17:41:46.277347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:08.939 [2024-11-26 17:41:46.277382] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:08.939 [2024-11-26 17:41:46.277411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.939 [2024-11-26 17:41:46.277438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:08.939 [2024-11-26 17:41:46.277474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:45:08.939 [2024-11-26 17:41:46.277511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.939 [2024-11-26 17:41:46.277620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.939 [2024-11-26 17:41:46.277638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:08.939 [2024-11-26 17:41:46.277647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:45:08.939 [2024-11-26 17:41:46.277654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.939 [2024-11-26 17:41:46.277755] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:08.939 [2024-11-26 17:41:46.277767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:08.939 [2024-11-26 17:41:46.277777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:08.939 [2024-11-26 17:41:46.277803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:08.939 [2024-11-26 17:41:46.277825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:08.939 [2024-11-26 17:41:46.277839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:08.939 [2024-11-26 17:41:46.277862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:08.939 [2024-11-26 17:41:46.277870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:08.939 [2024-11-26 17:41:46.277878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:08.939 [2024-11-26 17:41:46.277885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:08.939 [2024-11-26 17:41:46.277892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:08.939 [2024-11-26 17:41:46.277907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277913] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:08.939 [2024-11-26 17:41:46.277929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:08.939 [2024-11-26 17:41:46.277949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:08.939 [2024-11-26 17:41:46.277971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:08.939 [2024-11-26 17:41:46.277984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:08.939 [2024-11-26 17:41:46.277991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:08.939 [2024-11-26 17:41:46.277997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:08.939 [2024-11-26 17:41:46.278003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:08.939 [2024-11-26 17:41:46.278010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:08.939 [2024-11-26 17:41:46.278017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:08.939 [2024-11-26 17:41:46.278023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:08.939 [2024-11-26 17:41:46.278030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:08.939 [2024-11-26 17:41:46.278036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:08.939 [2024-11-26 17:41:46.278044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:08.939 [2024-11-26 17:41:46.278050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:08.939 [2024-11-26 17:41:46.278057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.278064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:08.939 [2024-11-26 17:41:46.278076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:08.939 [2024-11-26 17:41:46.278083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.278089] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:08.939 [2024-11-26 17:41:46.278097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:08.939 [2024-11-26 17:41:46.278109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:08.939 [2024-11-26 17:41:46.278117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:08.939 [2024-11-26 17:41:46.278125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:08.939 [2024-11-26 17:41:46.278132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:08.939 [2024-11-26 17:41:46.278139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:08.939 
[2024-11-26 17:41:46.278146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:08.939 [2024-11-26 17:41:46.278153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:08.939 [2024-11-26 17:41:46.278160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:08.939 [2024-11-26 17:41:46.278170] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:08.939 [2024-11-26 17:41:46.278180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:08.940 [2024-11-26 17:41:46.278199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:08.940 [2024-11-26 17:41:46.278207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:08.940 [2024-11-26 17:41:46.278214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:08.940 [2024-11-26 17:41:46.278222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:08.940 [2024-11-26 17:41:46.278230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:08.940 [2024-11-26 17:41:46.278237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:08.940 [2024-11-26 17:41:46.278245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:08.940 [2024-11-26 17:41:46.278253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:08.940 [2024-11-26 17:41:46.278261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:08.940 [2024-11-26 17:41:46.278302] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:08.940 [2024-11-26 17:41:46.278311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:45:08.940 [2024-11-26 17:41:46.278327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:08.940 [2024-11-26 17:41:46.278336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:08.940 [2024-11-26 17:41:46.278343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:08.940 [2024-11-26 17:41:46.278351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.940 [2024-11-26 17:41:46.278365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:08.940 [2024-11-26 17:41:46.278373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:45:08.940 [2024-11-26 17:41:46.278381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.940 [2024-11-26 17:41:46.329428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.940 [2024-11-26 17:41:46.329628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:08.940 [2024-11-26 17:41:46.329671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.070 ms 00:45:08.940 [2024-11-26 17:41:46.329694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:08.940 [2024-11-26 17:41:46.329965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:08.940 [2024-11-26 17:41:46.330006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:08.940 [2024-11-26 17:41:46.330036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:45:08.940 [2024-11-26 17:41:46.330066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.394784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.394933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:09.200 [2024-11-26 17:41:46.394972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.792 ms 00:45:09.200 [2024-11-26 17:41:46.394997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.395167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.395213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:09.200 [2024-11-26 17:41:46.395247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:09.200 [2024-11-26 17:41:46.395284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.396144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.396191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:09.200 [2024-11-26 17:41:46.396230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:45:09.200 [2024-11-26 17:41:46.396260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.396430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.396471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:09.200 [2024-11-26 17:41:46.396501] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:45:09.200 [2024-11-26 17:41:46.396526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.420711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.420848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:09.200 [2024-11-26 17:41:46.420883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.180 ms 00:45:09.200 [2024-11-26 17:41:46.420904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.441558] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:45:09.200 [2024-11-26 17:41:46.441706] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:09.200 [2024-11-26 17:41:46.441753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.441775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:09.200 [2024-11-26 17:41:46.441797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.688 ms 00:45:09.200 [2024-11-26 17:41:46.441823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.471461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.471569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:09.200 [2024-11-26 17:41:46.471602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.559 ms 00:45:09.200 [2024-11-26 17:41:46.471630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.491108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.491234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:09.200 [2024-11-26 17:41:46.491265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.384 ms 00:45:09.200 [2024-11-26 17:41:46.491285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.511003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.511046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:09.200 [2024-11-26 17:41:46.511059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.640 ms 00:45:09.200 [2024-11-26 17:41:46.511068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.511998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.512031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:09.200 [2024-11-26 17:41:46.512043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:45:09.200 [2024-11-26 17:41:46.512051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.611299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.200 [2024-11-26 17:41:46.611479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:09.200 [2024-11-26 17:41:46.611500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 99.406 ms 00:45:09.200 [2024-11-26 17:41:46.611509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.200 [2024-11-26 17:41:46.624202] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:09.460 [2024-11-26 17:41:46.652117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.652208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:09.460 [2024-11-26 17:41:46.652225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.495 ms 00:45:09.460 [2024-11-26 17:41:46.652243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.652437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.652450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:09.460 [2024-11-26 17:41:46.652460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:09.460 [2024-11-26 17:41:46.652468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.652538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.652548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:09.460 [2024-11-26 17:41:46.652557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:45:09.460 [2024-11-26 17:41:46.652569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.652637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.652670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:09.460 [2024-11-26 17:41:46.652679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:45:09.460 [2024-11-26 17:41:46.652688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.652734] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:09.460 [2024-11-26 17:41:46.652745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.652754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:09.460 [2024-11-26 17:41:46.652762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:09.460 [2024-11-26 17:41:46.652770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.693158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.693227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:09.460 [2024-11-26 17:41:46.693259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.441 ms 00:45:09.460 [2024-11-26 17:41:46.693269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.693470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.693491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:09.460 [2024-11-26 17:41:46.693506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:45:09.460 [2024-11-26 17:41:46.693517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:45:09.460 [2024-11-26 17:41:46.694993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:09.460 [2024-11-26 17:41:46.700967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 465.835 ms, result 0 00:45:09.460 [2024-11-26 17:41:46.701988] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:09.460 [2024-11-26 17:41:46.720199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:09.460 [2024-11-26T17:41:46.906Z] Copying: 4096/4096 [kB] (average 26 MBps) [2024-11-26 17:41:46.876463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:09.460 [2024-11-26 17:41:46.892585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.892654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:09.460 [2024-11-26 17:41:46.892676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:09.460 [2024-11-26 17:41:46.892684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.892710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:09.460 [2024-11-26 17:41:46.897480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.897509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:09.460 [2024-11-26 17:41:46.897520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:45:09.460 [2024-11-26 17:41:46.897529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.899660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.899695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:09.460 [2024-11-26 17:41:46.899706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.106 ms 00:45:09.460 [2024-11-26 17:41:46.899715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.460 [2024-11-26 17:41:46.903126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.460 [2024-11-26 17:41:46.903158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:09.460 [2024-11-26 17:41:46.903167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.394 ms 00:45:09.460 [2024-11-26 17:41:46.903175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:46.909027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:46.909085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:09.721 [2024-11-26 17:41:46.909095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.835 ms 00:45:09.721 [2024-11-26 17:41:46.909104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:46.944967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:46.944999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:09.721 [2024-11-26 17:41:46.945011] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 35.875 ms 00:45:09.721 [2024-11-26 17:41:46.945035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:46.965852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:46.965895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:09.721 [2024-11-26 17:41:46.965908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.804 ms 00:45:09.721 [2024-11-26 17:41:46.965917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:46.966060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:46.966072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:09.721 [2024-11-26 17:41:46.966097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:45:09.721 [2024-11-26 17:41:46.966105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:47.002320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:47.002361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:09.721 [2024-11-26 17:41:47.002372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.265 ms 00:45:09.721 [2024-11-26 17:41:47.002381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:47.040632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:47.040692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:09.721 [2024-11-26 17:41:47.040705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.272 ms 00:45:09.721 [2024-11-26 17:41:47.040728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:47.074803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:47.074907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:09.721 [2024-11-26 17:41:47.074922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.074 ms 00:45:09.721 [2024-11-26 17:41:47.074931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:47.108010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.721 [2024-11-26 17:41:47.108051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:09.721 [2024-11-26 17:41:47.108062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.056 ms 00:45:09.721 [2024-11-26 17:41:47.108070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.721 [2024-11-26 17:41:47.108122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:09.721 [2024-11-26 17:41:47.108140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:45:09.721 [2024-11-26 17:41:47.108177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:45:09.721 [2024-11-26 17:41:47.108370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108768] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:09.722 [2024-11-26 17:41:47.108979] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:09.722 [2024-11-26 17:41:47.108986] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e 00:45:09.722 [2024-11-26 17:41:47.108994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:09.722 [2024-11-26 17:41:47.109002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:45:09.722 [2024-11-26 17:41:47.109009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:09.722 [2024-11-26 17:41:47.109018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:09.722 [2024-11-26 17:41:47.109025] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:09.722 [2024-11-26 17:41:47.109032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:09.722 [2024-11-26 17:41:47.109045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:09.722 [2024-11-26 17:41:47.109051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:09.722 [2024-11-26 17:41:47.109058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:09.722 [2024-11-26 17:41:47.109067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.722 [2024-11-26 17:41:47.109075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:09.722 [2024-11-26 17:41:47.109084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:45:09.722 [2024-11-26 17:41:47.109092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.722 [2024-11-26 17:41:47.130033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.722 [2024-11-26 17:41:47.130067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:09.722 [2024-11-26 17:41:47.130079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.957 ms 00:45:09.722 [2024-11-26 17:41:47.130087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.722 [2024-11-26 17:41:47.130707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:09.722 [2024-11-26 17:41:47.130719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:09.722 [2024-11-26 17:41:47.130728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:45:09.722 [2024-11-26 17:41:47.130736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.982 [2024-11-26 17:41:47.189674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:09.982 [2024-11-26 17:41:47.189859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:09.982 [2024-11-26 17:41:47.189878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:09.982 [2024-11-26 17:41:47.189896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.982 [2024-11-26 17:41:47.190069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:09.982 [2024-11-26 17:41:47.190081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:09.982 [2024-11-26 17:41:47.190090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:09.982 [2024-11-26 17:41:47.190098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.982 [2024-11-26 17:41:47.190165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:09.982 [2024-11-26 17:41:47.190180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:09.982 [2024-11-26 17:41:47.190188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:09.982 [2024-11-26 17:41:47.190196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.982 [2024-11-26 17:41:47.190224] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:09.982 [2024-11-26 17:41:47.190233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:09.982 [2024-11-26 17:41:47.190241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:09.982 [2024-11-26 17:41:47.190249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:09.982 [2024-11-26 17:41:47.327729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:09.982 [2024-11-26 17:41:47.327864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:09.982 [2024-11-26 17:41:47.327889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:09.982 [2024-11-26 17:41:47.327916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.432693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.432795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:10.242 [2024-11-26 17:41:47.432818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.432833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.433004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.433025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:10.242 [2024-11-26 17:41:47.433039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.433053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.433098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.433125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:10.242 [2024-11-26 17:41:47.433139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.433152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.433313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.433333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:10.242 [2024-11-26 17:41:47.433346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.433358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.433789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.433870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:10.242 [2024-11-26 17:41:47.433932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.433963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.434109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.434318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:10.242 [2024-11-26 17:41:47.434364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.434394] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.434537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:10.242 [2024-11-26 17:41:47.434588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:10.242 [2024-11-26 17:41:47.434803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:10.242 [2024-11-26 17:41:47.434930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:10.242 [2024-11-26 17:41:47.435396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.766 ms, result 0 00:45:11.623 00:45:11.623 00:45:11.623 17:41:48 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79537 00:45:11.623 17:41:48 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:45:11.623 17:41:48 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79537 00:45:11.623 17:41:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79537 ']' 00:45:11.623 17:41:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:11.623 17:41:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:11.623 17:41:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:11.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:11.623 17:41:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:11.623 17:41:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:45:11.623 [2024-11-26 17:41:48.789184] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:45:11.623 [2024-11-26 17:41:48.789464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79537 ] 00:45:11.623 [2024-11-26 17:41:48.974598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:11.882 [2024-11-26 17:41:49.128138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:12.819 17:41:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:12.819 17:41:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:45:12.819 17:41:50 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:45:13.077 [2024-11-26 17:41:50.439456] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:13.077 [2024-11-26 17:41:50.439534] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:13.337 [2024-11-26 17:41:50.609118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.609185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:13.337 [2024-11-26 17:41:50.609204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:13.337 [2024-11-26 17:41:50.609214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.612959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.612997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:13.337 [2024-11-26 17:41:50.613008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:45:13.337 [2024-11-26 17:41:50.613016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.613114] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:13.337 [2024-11-26 17:41:50.614128] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:13.337 [2024-11-26 17:41:50.614211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.614222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:13.337 [2024-11-26 17:41:50.614233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:45:13.337 [2024-11-26 17:41:50.614243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.616788] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:13.337 [2024-11-26 17:41:50.637567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.637676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:13.337 [2024-11-26 17:41:50.637695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.823 ms 00:45:13.337 [2024-11-26 17:41:50.637708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.637807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.637825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:13.337 [2024-11-26 17:41:50.637835] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:45:13.337 [2024-11-26 17:41:50.637847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.650748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.650825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:13.337 [2024-11-26 17:41:50.650838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.865 ms 00:45:13.337 [2024-11-26 17:41:50.650852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.651019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.651035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:13.337 [2024-11-26 17:41:50.651045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:45:13.337 [2024-11-26 17:41:50.651061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.651094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.651106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:13.337 [2024-11-26 17:41:50.651114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:13.337 [2024-11-26 17:41:50.651124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.651152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:13.337 [2024-11-26 17:41:50.657482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.657513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:13.337 [2024-11-26 17:41:50.657527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.329 ms 00:45:13.337 [2024-11-26 17:41:50.657535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.657598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.337 [2024-11-26 17:41:50.657621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:13.337 [2024-11-26 17:41:50.657637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:13.337 [2024-11-26 17:41:50.657646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.337 [2024-11-26 17:41:50.657673] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:13.337 [2024-11-26 17:41:50.657696] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:13.337 [2024-11-26 17:41:50.657746] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:13.337 [2024-11-26 17:41:50.657766] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:13.337 [2024-11-26 17:41:50.657863] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:13.337 [2024-11-26 17:41:50.657874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:13.337 [2024-11-26 17:41:50.657894] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:13.337 [2024-11-26 17:41:50.657906] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:13.338 [2024-11-26 17:41:50.657918] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:13.338 [2024-11-26 17:41:50.657928] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:13.338 [2024-11-26 17:41:50.657938] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:13.338 [2024-11-26 17:41:50.657946] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:13.338 [2024-11-26 17:41:50.657959] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:13.338 [2024-11-26 17:41:50.657968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.338 [2024-11-26 17:41:50.657978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:13.338 [2024-11-26 17:41:50.657986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:45:13.338 [2024-11-26 17:41:50.657999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.338 [2024-11-26 17:41:50.658078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.338 [2024-11-26 17:41:50.658089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:13.338 [2024-11-26 17:41:50.658097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:13.338 [2024-11-26 17:41:50.658106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.338 [2024-11-26 17:41:50.658202] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:13.338 [2024-11-26 17:41:50.658215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:13.338 [2024-11-26 17:41:50.658224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:13.338 [2024-11-26 17:41:50.658252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:13.338 [2024-11-26 17:41:50.658279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:13.338 [2024-11-26 17:41:50.658299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:13.338 [2024-11-26 17:41:50.658308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:13.338 [2024-11-26 17:41:50.658315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:13.338 [2024-11-26 17:41:50.658324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:13.338 [2024-11-26 17:41:50.658332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:13.338 [2024-11-26 17:41:50.658341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:13.338 
[2024-11-26 17:41:50.658347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:13.338 [2024-11-26 17:41:50.658357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:13.338 [2024-11-26 17:41:50.658392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:13.338 [2024-11-26 17:41:50.658419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:13.338 [2024-11-26 17:41:50.658442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:13.338 [2024-11-26 17:41:50.658467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:13.338 [2024-11-26 17:41:50.658490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:13.338 [2024-11-26 17:41:50.658507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:13.338 [2024-11-26 17:41:50.658516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:13.338 [2024-11-26 17:41:50.658523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:13.338 [2024-11-26 17:41:50.658532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:13.338 [2024-11-26 17:41:50.658539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:13.338 [2024-11-26 17:41:50.658555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:13.338 [2024-11-26 17:41:50.658572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:13.338 [2024-11-26 17:41:50.658579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658588] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:13.338 [2024-11-26 17:41:50.658598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:13.338 [2024-11-26 17:41:50.658619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:13.338 [2024-11-26 17:41:50.658638] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:45:13.338 [2024-11-26 17:41:50.658645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:13.338 [2024-11-26 17:41:50.658654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:13.338 [2024-11-26 17:41:50.658661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:13.338 [2024-11-26 17:41:50.658670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:13.338 [2024-11-26 17:41:50.658678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:13.338 [2024-11-26 17:41:50.658689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:13.338 [2024-11-26 17:41:50.658698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:13.338 [2024-11-26 17:41:50.658719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:13.338 [2024-11-26 17:41:50.658727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:13.338 [2024-11-26 17:41:50.658741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:13.338 [2024-11-26 17:41:50.658749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:13.338 [2024-11-26 17:41:50.658773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:13.338 [2024-11-26 17:41:50.658781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:13.338 [2024-11-26 17:41:50.658793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:13.338 [2024-11-26 17:41:50.658800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:13.338 [2024-11-26 17:41:50.658812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:13.338 [2024-11-26 17:41:50.658819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:13.338 [2024-11-26 17:41:50.658831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:13.338 [2024-11-26 17:41:50.658838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:13.338 [2024-11-26 17:41:50.658851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:13.338 [2024-11-26 17:41:50.658859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:13.339 [2024-11-26 17:41:50.658870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:13.339 [2024-11-26 
17:41:50.658879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:13.339 [2024-11-26 17:41:50.658897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:13.339 [2024-11-26 17:41:50.658905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:13.339 [2024-11-26 17:41:50.658917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:13.339 [2024-11-26 17:41:50.658925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:13.339 [2024-11-26 17:41:50.658938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.658946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:13.339 [2024-11-26 17:41:50.658959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:45:13.339 [2024-11-26 17:41:50.658972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.339 [2024-11-26 17:41:50.709645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.709806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:13.339 [2024-11-26 17:41:50.709846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.694 ms 00:45:13.339 [2024-11-26 17:41:50.709862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.339 [2024-11-26 17:41:50.710066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.710078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:13.339 [2024-11-26 17:41:50.710092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:45:13.339 [2024-11-26 17:41:50.710102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.339 [2024-11-26 17:41:50.765118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.765167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:13.339 [2024-11-26 17:41:50.765184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.090 ms 00:45:13.339 [2024-11-26 17:41:50.765209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.339 [2024-11-26 17:41:50.765326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.765337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:13.339 [2024-11-26 17:41:50.765348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:13.339 [2024-11-26 17:41:50.765357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.339 [2024-11-26 17:41:50.766170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.766191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:13.339 [2024-11-26 17:41:50.766203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:45:13.339 [2024-11-26 17:41:50.766211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:45:13.339 [2024-11-26 17:41:50.766346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.339 [2024-11-26 17:41:50.766358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:13.339 [2024-11-26 17:41:50.766369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:45:13.339 [2024-11-26 17:41:50.766378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.598 [2024-11-26 17:41:50.794683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.598 [2024-11-26 17:41:50.794732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:13.598 [2024-11-26 17:41:50.794750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.322 ms 00:45:13.598 [2024-11-26 17:41:50.794776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.598 [2024-11-26 17:41:50.829274] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:13.598 [2024-11-26 17:41:50.829389] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:13.598 [2024-11-26 17:41:50.829421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.598 [2024-11-26 17:41:50.829431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:13.598 [2024-11-26 17:41:50.829447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.529 ms 00:45:13.598 [2024-11-26 17:41:50.829471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.598 [2024-11-26 17:41:50.860199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.598 [2024-11-26 17:41:50.860264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:13.598 [2024-11-26 17:41:50.860286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.681 ms 00:45:13.598 [2024-11-26 17:41:50.860294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.598 [2024-11-26 17:41:50.878428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.598 [2024-11-26 17:41:50.878462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:13.598 [2024-11-26 17:41:50.878484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.074 ms 00:45:13.598 [2024-11-26 17:41:50.878493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.598 [2024-11-26 17:41:50.895211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.598 [2024-11-26 17:41:50.895292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:13.598 [2024-11-26 17:41:50.895314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.670 ms 00:45:13.599 [2024-11-26 17:41:50.895338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 17:41:50.896296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:50.896332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:13.599 [2024-11-26 17:41:50.896350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:45:13.599 [2024-11-26 17:41:50.896360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 
17:41:50.999543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:50.999714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:13.599 [2024-11-26 17:41:50.999750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.338 ms 00:45:13.599 [2024-11-26 17:41:50.999762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 17:41:51.012226] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:13.599 [2024-11-26 17:41:51.040200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:51.040314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:13.599 [2024-11-26 17:41:51.040331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.318 ms 00:45:13.599 [2024-11-26 17:41:51.040346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 17:41:51.040523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:51.040541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:13.599 [2024-11-26 17:41:51.040551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:13.599 [2024-11-26 17:41:51.040564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 17:41:51.040652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:51.040669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:13.599 [2024-11-26 17:41:51.040679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:45:13.599 [2024-11-26 17:41:51.040698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 17:41:51.040726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:51.040740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:13.599 [2024-11-26 17:41:51.040748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:13.599 [2024-11-26 17:41:51.040761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.599 [2024-11-26 17:41:51.040809] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:13.599 [2024-11-26 17:41:51.040874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.599 [2024-11-26 17:41:51.040893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:13.599 [2024-11-26 17:41:51.040923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:45:13.599 [2024-11-26 17:41:51.040936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.857 [2024-11-26 17:41:51.081408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.857 [2024-11-26 17:41:51.081467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:13.857 [2024-11-26 17:41:51.081490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.506 ms 00:45:13.858 [2024-11-26 17:41:51.081502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.858 [2024-11-26 17:41:51.081684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:13.858 [2024-11-26 17:41:51.081699] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:13.858 [2024-11-26 17:41:51.081722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:45:13.858 [2024-11-26 17:41:51.081732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:13.858 [2024-11-26 17:41:51.083286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:13.858 [2024-11-26 17:41:51.089331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 474.627 ms, result 0 00:45:13.858 [2024-11-26 17:41:51.090496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:13.858 Some configs were skipped because the RPC state that can call them passed over. 00:45:13.858 17:41:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:45:14.115 [2024-11-26 17:41:51.357317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:14.115 [2024-11-26 17:41:51.357413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:45:14.115 [2024-11-26 17:41:51.357448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:45:14.115 [2024-11-26 17:41:51.357462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:14.115 [2024-11-26 17:41:51.357507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.753 ms, result 0 00:45:14.115 true 00:45:14.115 17:41:51 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:45:14.374 [2024-11-26 17:41:51.580808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:14.374 [2024-11-26 17:41:51.580882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:45:14.374 [2024-11-26 17:41:51.580903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:45:14.374 [2024-11-26 17:41:51.580914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:14.374 [2024-11-26 17:41:51.580964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.398 ms, result 0 00:45:14.374 true 00:45:14.374 17:41:51 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79537 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79537 ']' 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79537 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79537 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:14.374 killing process with pid 79537 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79537' 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79537 00:45:14.374 17:41:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79537 00:45:15.752 [2024-11-26 17:41:52.947112] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.947185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:15.752 [2024-11-26 17:41:52.947200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:15.752 [2024-11-26 17:41:52.947211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.947254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:15.752 [2024-11-26 17:41:52.952845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.952880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:15.752 [2024-11-26 17:41:52.952899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.580 ms 00:45:15.752 [2024-11-26 17:41:52.952908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.953244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.953258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:15.752 [2024-11-26 17:41:52.953271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:45:15.752 [2024-11-26 17:41:52.953281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.956901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.956989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:15.752 [2024-11-26 17:41:52.957008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:45:15.752 [2024-11-26 17:41:52.957017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.962740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.962776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:15.752 [2024-11-26 17:41:52.962788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.664 ms 00:45:15.752 [2024-11-26 17:41:52.962796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.978726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.978777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:15.752 [2024-11-26 17:41:52.978795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.891 ms 00:45:15.752 [2024-11-26 17:41:52.978803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.990198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.990301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:15.752 [2024-11-26 17:41:52.990325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.340 ms 00:45:15.752 [2024-11-26 17:41:52.990335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:52.990517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:52.990531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:15.752 [2024-11-26 17:41:52.990545] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:45:15.752 [2024-11-26 17:41:52.990554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:53.008792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:53.008871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:15.752 [2024-11-26 17:41:53.008890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.245 ms 00:45:15.752 [2024-11-26 17:41:53.008899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:53.025150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:53.025182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:15.752 [2024-11-26 17:41:53.025199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.223 ms 00:45:15.752 [2024-11-26 17:41:53.025207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:53.041861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:53.041902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:15.752 [2024-11-26 17:41:53.041919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.627 ms 00:45:15.752 [2024-11-26 17:41:53.041928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:53.059233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.752 [2024-11-26 17:41:53.059272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:15.752 [2024-11-26 17:41:53.059288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.236 ms 00:45:15.752 [2024-11-26 17:41:53.059297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.752 [2024-11-26 17:41:53.059357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:15.752 [2024-11-26 17:41:53.059378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 
17:41:53.059504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:45:15.752 [2024-11-26 17:41:53.059843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:15.752 [2024-11-26 17:41:53.059948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.059959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.059974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.059984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.059999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:15.753 [2024-11-26 17:41:53.060718] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:15.753 [2024-11-26 17:41:53.060740] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e 00:45:15.753 [2024-11-26 17:41:53.060756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:15.753 [2024-11-26 17:41:53.060770] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:15.753 [2024-11-26 17:41:53.060780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:15.753 [2024-11-26 17:41:53.060794] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:15.753 [2024-11-26 17:41:53.060803] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:15.753 [2024-11-26 17:41:53.060818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:15.753 [2024-11-26 17:41:53.060826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:15.753 [2024-11-26 17:41:53.060839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:15.753 [2024-11-26 17:41:53.060847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:15.753 [2024-11-26 17:41:53.060862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
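[editor's note, not part of the captured log] The ftl_debug statistics block above reports total writes: 960, user writes: 0, and WAF: inf. Write amplification factor (WAF) is the ratio of media writes to user writes, so with zero user writes the ratio is undefined and the dump prints "inf". A minimal sketch of that arithmetic, in Python rather than SPDK's C (the helper name is hypothetical, not an SPDK API):

    # Hedged sketch: reproduce the WAF figure from the ftl_debug dump above.
    def write_amplification(total_writes: int, user_writes: int) -> float:
        # 960 media writes against 0 user writes -> undefined ratio, shown as inf
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(write_amplification(960, 0))  # prints: inf, matching "WAF: inf" above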
00:45:15.753 [2024-11-26 17:41:53.060871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:15.753 [2024-11-26 17:41:53.060887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:45:15.753 [2024-11-26 17:41:53.060902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.753 [2024-11-26 17:41:53.083492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.753 [2024-11-26 17:41:53.083527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:15.753 [2024-11-26 17:41:53.083549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.597 ms 00:45:15.753 [2024-11-26 17:41:53.083557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.753 [2024-11-26 17:41:53.084253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.753 [2024-11-26 17:41:53.084274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:15.753 [2024-11-26 17:41:53.084294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:45:15.753 [2024-11-26 17:41:53.084303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.753 [2024-11-26 17:41:53.161956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:15.753 [2024-11-26 17:41:53.162099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:15.753 [2024-11-26 17:41:53.162123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:15.753 [2024-11-26 17:41:53.162134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.753 [2024-11-26 17:41:53.162297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:15.753 [2024-11-26 17:41:53.162308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:15.753 [2024-11-26 17:41:53.162328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:15.753 [2024-11-26 17:41:53.162337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.753 [2024-11-26 17:41:53.162410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:15.753 [2024-11-26 17:41:53.162423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:15.753 [2024-11-26 17:41:53.162443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:15.753 [2024-11-26 17:41:53.162452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.753 [2024-11-26 17:41:53.162480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:15.753 [2024-11-26 17:41:53.162489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:15.753 [2024-11-26 17:41:53.162502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:15.753 [2024-11-26 17:41:53.162514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.012 [2024-11-26 17:41:53.310854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.012 [2024-11-26 17:41:53.310936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:16.012 [2024-11-26 17:41:53.310958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.012 [2024-11-26 17:41:53.310968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.012 [2024-11-26 
17:41:53.436157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.012 [2024-11-26 17:41:53.436237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:16.012 [2024-11-26 17:41:53.436264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.012 [2024-11-26 17:41:53.436274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.012 [2024-11-26 17:41:53.436420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.013 [2024-11-26 17:41:53.436432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:16.013 [2024-11-26 17:41:53.436451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.013 [2024-11-26 17:41:53.436459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.013 [2024-11-26 17:41:53.436496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.013 [2024-11-26 17:41:53.436505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:16.013 [2024-11-26 17:41:53.436518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.013 [2024-11-26 17:41:53.436526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.013 [2024-11-26 17:41:53.436700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.013 [2024-11-26 17:41:53.436716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:16.013 [2024-11-26 17:41:53.436733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.013 [2024-11-26 17:41:53.436742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.013 [2024-11-26 17:41:53.436801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.013 [2024-11-26 17:41:53.436813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:16.013 [2024-11-26 17:41:53.436830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.013 [2024-11-26 17:41:53.436840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.013 [2024-11-26 17:41:53.436900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.013 [2024-11-26 17:41:53.436910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:16.013 [2024-11-26 17:41:53.436930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.013 [2024-11-26 17:41:53.436938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.013 [2024-11-26 17:41:53.436998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.013 [2024-11-26 17:41:53.437009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:16.013 [2024-11-26 17:41:53.437024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.013 [2024-11-26 17:41:53.437033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.013 [2024-11-26 17:41:53.437228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 491.022 ms, result 0 00:45:17.385 17:41:54 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:17.385 [2024-11-26 17:41:54.707389] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:45:17.385 [2024-11-26 17:41:54.707659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79608 ] 00:45:17.643 [2024-11-26 17:41:54.889245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.643 [2024-11-26 17:41:55.032273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:18.210 [2024-11-26 17:41:55.456338] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:18.210 [2024-11-26 17:41:55.456515] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:18.210 [2024-11-26 17:41:55.618551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.210 [2024-11-26 17:41:55.618739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:18.210 [2024-11-26 17:41:55.618777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:18.210 [2024-11-26 17:41:55.618799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.210 [2024-11-26 17:41:55.622225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.210 [2024-11-26 17:41:55.622308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:18.210 [2024-11-26 17:41:55.622337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.398 ms 00:45:18.210 [2024-11-26 17:41:55.622358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.210 [2024-11-26 17:41:55.622467] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:18.210 [2024-11-26 17:41:55.623553] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:18.210 [2024-11-26 17:41:55.623642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.210 [2024-11-26 17:41:55.623680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:18.210 [2024-11-26 17:41:55.623714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:45:18.210 [2024-11-26 17:41:55.623742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.210 [2024-11-26 17:41:55.626349] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:18.210 [2024-11-26 17:41:55.647026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.210 [2024-11-26 17:41:55.647115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:18.210 [2024-11-26 17:41:55.647161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.718 ms 00:45:18.210 [2024-11-26 17:41:55.647182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.210 [2024-11-26 17:41:55.647290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.210 [2024-11-26 17:41:55.647334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:18.210 [2024-11-26 17:41:55.647362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:45:18.210 [2024-11-26 
17:41:55.647388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.660267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.660330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:18.470 [2024-11-26 17:41:55.660379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.839 ms 00:45:18.470 [2024-11-26 17:41:55.660400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.660546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.660591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:18.470 [2024-11-26 17:41:55.660637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:45:18.470 [2024-11-26 17:41:55.660674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.660735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.660747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:18.470 [2024-11-26 17:41:55.660756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:18.470 [2024-11-26 17:41:55.660765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.660792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:18.470 [2024-11-26 17:41:55.666794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.666824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:18.470 [2024-11-26 17:41:55.666835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.023 ms 00:45:18.470 [2024-11-26 17:41:55.666842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.666903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.666913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:18.470 [2024-11-26 17:41:55.666921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:18.470 [2024-11-26 17:41:55.666929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.666953] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:18.470 [2024-11-26 17:41:55.666975] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:18.470 [2024-11-26 17:41:55.667009] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:18.470 [2024-11-26 17:41:55.667025] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:18.470 [2024-11-26 17:41:55.667116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:18.470 [2024-11-26 17:41:55.667127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:18.470 [2024-11-26 17:41:55.667137] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
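[editor's note, not part of the captured log] The layout and superblock dumps that follow report region sizes two ways: the dump_region lines print offsets and sizes in MiB, while the "SB metadata layout" lines give blk_offs/blk_sz counted in FTL blocks. Assuming a 4 KiB FTL block (an inference from the numbers, not stated in the log), the two views agree: for example, the type:0x2 entry's blk_sz of 0x5a00 blocks works out to 90.00 MiB, matching the l2p region in the dump_region output below. A minimal conversion sketch under that assumption:

    # Hedged sketch: convert blk_sz from the superblock layout dump to MiB,
    # assuming a 4 KiB FTL block size (inferred, not confirmed by this log).
    FTL_BLOCK_SIZE = 4096

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1 << 20)

    print(blocks_to_mib(0x5A00))  # 90.0 -> matches "Region l2p ... blocks: 90.00 MiB"
    print(blocks_to_mib(0x80))    # 0.5 -> matches "Region band_md ... blocks: 0.50 MiB"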
00:45:18.470 [2024-11-26 17:41:55.667151] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667161] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667170] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:18.470 [2024-11-26 17:41:55.667179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:18.470 [2024-11-26 17:41:55.667187] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:18.470 [2024-11-26 17:41:55.667194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:18.470 [2024-11-26 17:41:55.667202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.667210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:18.470 [2024-11-26 17:41:55.667218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:45:18.470 [2024-11-26 17:41:55.667225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.667296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.470 [2024-11-26 17:41:55.667316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:18.470 [2024-11-26 17:41:55.667323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:45:18.470 [2024-11-26 17:41:55.667330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.470 [2024-11-26 17:41:55.667437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:18.470 [2024-11-26 17:41:55.667448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:18.470 [2024-11-26 17:41:55.667456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:18.470 [2024-11-26 17:41:55.667479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:18.470 [2024-11-26 17:41:55.667501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:18.470 [2024-11-26 17:41:55.667516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:18.470 [2024-11-26 17:41:55.667537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:18.470 [2024-11-26 17:41:55.667545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:18.470 [2024-11-26 17:41:55.667552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:18.470 [2024-11-26 17:41:55.667559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:18.470 [2024-11-26 17:41:55.667566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:45:18.470 [2024-11-26 17:41:55.667579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:18.470 [2024-11-26 17:41:55.667600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:18.470 [2024-11-26 17:41:55.667619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:18.470 [2024-11-26 17:41:55.667655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:18.470 [2024-11-26 17:41:55.667675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:18.470 [2024-11-26 17:41:55.667695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:18.470 [2024-11-26 17:41:55.667708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:18.470 [2024-11-26 17:41:55.667715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:18.470 [2024-11-26 17:41:55.667721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:18.470 [2024-11-26 17:41:55.667728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:18.470 [2024-11-26 17:41:55.667734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:18.470 [2024-11-26 17:41:55.667741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:18.470 [2024-11-26 17:41:55.667755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:18.470 [2024-11-26 17:41:55.667765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:18.470 [2024-11-26 17:41:55.667780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:18.470 [2024-11-26 17:41:55.667791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:18.470 [2024-11-26 17:41:55.667807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:18.470 [2024-11-26 17:41:55.667814] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:18.470 [2024-11-26 17:41:55.667821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:18.470 [2024-11-26 17:41:55.667828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:18.470 [2024-11-26 17:41:55.667834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:18.470 [2024-11-26 17:41:55.667841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:18.470 [2024-11-26 17:41:55.667850] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:18.471 [2024-11-26 17:41:55.667860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:18.471 [2024-11-26 17:41:55.667875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:18.471 [2024-11-26 17:41:55.667882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:18.471 [2024-11-26 17:41:55.667889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:18.471 [2024-11-26 17:41:55.667896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:18.471 [2024-11-26 17:41:55.667903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:18.471 [2024-11-26 17:41:55.667910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:18.471 [2024-11-26 17:41:55.667917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:18.471 [2024-11-26 17:41:55.667925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:18.471 [2024-11-26 17:41:55.667932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:18.471 [2024-11-26 17:41:55.667970] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:18.471 [2024-11-26 17:41:55.667978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:18.471 [2024-11-26 17:41:55.667995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:18.471 [2024-11-26 17:41:55.668002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:18.471 [2024-11-26 17:41:55.668011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:18.471 [2024-11-26 17:41:55.668019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.668032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:18.471 [2024-11-26 17:41:55.668040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:45:18.471 [2024-11-26 17:41:55.668048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.717560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.717705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:18.471 [2024-11-26 17:41:55.717743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.540 ms 00:45:18.471 [2024-11-26 17:41:55.717766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.718013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.718057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:18.471 [2024-11-26 17:41:55.718090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:45:18.471 [2024-11-26 17:41:55.718124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.785544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.785688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:18.471 [2024-11-26 17:41:55.785722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.494 ms 00:45:18.471 [2024-11-26 17:41:55.785745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.785903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.785956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:18.471 [2024-11-26 17:41:55.785986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:18.471 [2024-11-26 17:41:55.786015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.786821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.786867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:18.471 [2024-11-26 17:41:55.786906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:45:18.471 [2024-11-26 17:41:55.786934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.787102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.787141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:18.471 [2024-11-26 17:41:55.787170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:45:18.471 [2024-11-26 17:41:55.787198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.811213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.811330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:18.471 [2024-11-26 17:41:55.811361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.012 ms 00:45:18.471 [2024-11-26 17:41:55.811383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.832475] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:18.471 [2024-11-26 17:41:55.832574] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:18.471 [2024-11-26 17:41:55.832618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.832641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:18.471 [2024-11-26 17:41:55.832662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.097 ms 00:45:18.471 [2024-11-26 17:41:55.832680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.863015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.863124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:18.471 [2024-11-26 17:41:55.863156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.291 ms 00:45:18.471 [2024-11-26 17:41:55.863177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.882063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.882143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:18.471 [2024-11-26 17:41:55.882172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.823 ms 00:45:18.471 [2024-11-26 17:41:55.882193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.900025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.900100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:18.471 [2024-11-26 17:41:55.900127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.776 ms 00:45:18.471 [2024-11-26 17:41:55.900148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.471 [2024-11-26 17:41:55.901018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.471 [2024-11-26 17:41:55.901082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:18.471 [2024-11-26 17:41:55.901117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:45:18.471 [2024-11-26 17:41:55.901138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:18.730 [2024-11-26 17:41:56.003078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:18.730 [2024-11-26 
17:41:56.003207] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore P2L checkpoints: duration 102.075 ms, status 0
[2024-11-26 17:41:56.015466] ftl_l2p_cache.c: ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-26 17:41:56.043938] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize L2P: duration 40.562 ms, status 0
[2024-11-26 17:41:56.044336] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore L2P: duration 0.008 ms, status 0
[2024-11-26 17:41:56.044550] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finalize band initialization: duration 0.051 ms, status 0
[2024-11-26 17:41:56.044742] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Start core poller: duration 0.020 ms, status 0
[2024-11-26 17:41:56.044910] mngt/ftl_mngt_self_test.c: ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-26 17:41:56.044943] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Self test on startup: duration 0.034 ms, status 0
[2024-11-26 17:41:56.083081] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Set FTL dirty state: duration 38.088 ms, status 0
[2024-11-26 17:41:56.083396] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finalize initialization: duration 0.050 ms, status 0
[2024-11-26 17:41:56.084994] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-26 17:41:56.090161] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 466.924 ms, result 0
[2024-11-26 17:41:56.091099] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-26 17:41:56.110823] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
Copying: 36/256 [MB] (36 MBps), 68/256 (32 MBps), 97/256 (28 MBps), 126/256 (29 MBps), 156/256 (29 MBps), 184/256 (28 MBps), 213/256 (28 MBps), 241/256 (28 MBps), 256/256 [MB] (average 30 MBps)
[2024-11-26 17:42:04.977606] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-26 17:42:04.999110] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinit core IO channel: duration 0.006 ms, status 0
[2024-11-26 17:42:04.999466] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-26 17:42:05.004452] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Unregister IO device: duration 4.939 ms, status 0
[2024-11-26 17:42:05.004990] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Stop core poller: duration 0.259 ms, status 0
[2024-11-26 17:42:05.008410] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist L2P: duration 3.265 ms, status 0
[2024-11-26 17:42:05.014751] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finish L2P trims: duration 6.211 ms, status 0
[2024-11-26 17:42:05.053554] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist NV cache metadata: duration 38.659 ms, status 0
[2024-11-26 17:42:05.075630] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist valid map metadata: duration 21.880 ms, status 0
[2024-11-26 17:42:05.075955] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist P2L metadata: duration 0.090 ms, status 0
[2024-11-26 17:42:05.113061] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist band info metadata: duration 36.999 ms, status 0
[2024-11-26 17:42:05.149022] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist trim metadata: duration 35.836 ms, status 0
[2024-11-26 17:42:05.184133] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist superblock: duration 35.068 ms, status 0
[2024-11-26 17:42:05.219191] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Set FTL clean state: duration 34.950 ms, status 0
[2024-11-26 17:42:05.219383] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-26 17:42:05.219399] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report identical values)
[2024-11-26 17:42:05.220204] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fd46404a-df59-44ee-8649-89998e51716e
[2024-11-26 17:42:05.220220] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-26 17:42:05.220227] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-26 17:42:05.220235] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-26 17:42:05.220243] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-26 17:42:05.220250] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-26 17:42:05.220292] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Dump statistics: duration 0.911 ms, status 0
[2024-11-26 17:42:05.241079] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinitialize L2P: duration 20.781 ms, status 0
[2024-11-26 17:42:05.241745] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinitialize P2L checkpointing: duration 0.571 ms, status 0
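The statistics dump above reports total writes 960, user writes 0, and "WAF: inf": with no user writes, the write-amplification factor divides by zero. A minimal bash sketch of that calculation, using awk for the floating-point division (the variable names are illustrative, not taken from the SPDK tree):

  # WAF = total device writes / user writes; print "inf" when user_writes is 0,
  # matching the ftl_dev_dump_stats output above.
  total_writes=960
  user_writes=0
  awk -v t="$total_writes" -v u="$user_writes" \
      'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.3f\n", t / u }'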
[2024-11-26 17:42:05.300197] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-11-26 17:42:05.300408] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-11-26 17:42:05.300502] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-11-26 17:42:05.300556] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-11-26 17:42:05.440720] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548021] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548236] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548298] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548437] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548502] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548581] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548694] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-11-26 17:42:05.548886] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.849 ms, result 0
17:42:06 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/data: OK
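The "md5sum -c" line above is the trim test's data-integrity check: a checksum manifest is recorded before the FTL device is torn down and verified against the restored data afterwards. A minimal sketch of that pattern, assuming a testdir layout like the paths in the log:

  testdir=/home/vagrant/spdk_repo/spdk/test/ftl
  # record a baseline checksum of the test data before shutdown/restore
  md5sum "$testdir/data" > "$testdir/testfile.md5"
  # ... FTL shutdown, restart, restore ...
  # verify the restored data against the baseline; prints "<file>: OK" on success
  md5sum -c "$testdir/testfile.md5"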
17:42:07 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
17:42:07 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
17:42:07 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
17:42:07 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
17:42:07 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
17:42:07 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
17:42:07 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79537
Process with pid 79537 is not found
17:42:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79537 ']'
17:42:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79537
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79537) - No such process
17:42:07 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79537 is not found'
************************************
END TEST ftl_trim
************************************
real 1m13.424s
user 1m49.448s
sys 0m8.161s
17:42:07 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
17:42:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
17:42:07 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
17:42:07 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
17:42:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
17:42:07 ftl -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST ftl_restore
************************************
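The END TEST / START TEST banners and the real/user/sys block above come from the run_test helper in autotest_common.sh, which brackets each sub-test with banners and times it. A minimal sketch of the pattern visible in this log, not SPDK's actual implementation:

  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"    # run the test script with its arguments, printing real/user/sys
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
  # usage matching the invocation above:
  # run_test_sketch ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0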
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]]
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2
17:42:07 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
17:42:07 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: read -ra ver1
17:42:07 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: read -ra ver2
17:42:07 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
17:42:07 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
17:42:07 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
17:42:07 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
17:42:07 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
17:42:07 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
17:42:07 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
17:42:07 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
17:42:07 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
17:42:07 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
17:42:07 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
17:42:07 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
17:42:07 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
17:42:07 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
17:42:07 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
17:42:07 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
17:42:07 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
17:42:07 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
17:42:07 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
17:42:07 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
17:42:07 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
17:42:07 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.OFaPSkQ4CR
17:42:07 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
17:42:07 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
17:42:07 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
17:42:07 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
17:42:07 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
17:42:07 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
17:42:07 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
17:42:07 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
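The getopts :u:c:f trace above is restore.sh consuming its options before the positional device argument: -c selects the NV-cache PCIe address, and the remaining argument becomes the base device. A minimal sketch of that parsing, assuming the same flag letters (the meanings of -u and -f are illustrative guesses, only -c is confirmed by this log):

  mount_dir=$(mktemp -d)
  while getopts ':u:c:f' opt; do
      case $opt in
          u) uuid=$OPTARG ;;        # assumed: restore an existing FTL instance by UUID
          c) nv_cache=$OPTARG ;;    # PCIe address of the NV-cache device (0000:00:10.0 here)
          f) fast_shutdown=1 ;;     # assumed meaning for the -f flag
      esac
  done
  shift $((OPTIND - 1))             # equivalent to the "shift 2" after "-c <bdf>" in the trace
  device=$1                         # 0000:00:11.0 here
  timeout=240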
17:42:07 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79808
17:42:07 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
17:42:07 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79808
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79808 ']'
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable
17:42:07 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
[2024-11-26 17:42:07.736167] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
[2024-11-26 17:42:07.736424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79808 ]
[2024-11-26 17:42:07.919134] app.c: spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-26 17:42:08.060956] reactor.c: reactor_run: *NOTICE*: Reactor started on core 0
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 ))
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0
17:42:09 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
17:42:09 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
17:42:09 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
17:42:09 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
17:42:09 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
17:42:09 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
17:42:09 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
17:42:09 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
17:42:09 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
  {
    "name": "nvme0n1",
    "aliases": [ "03af77fe-1af4-4527-b9b7-01fbcbd09fb6" ],
    "product_name": "NVMe disk",
    "block_size": 4096,
    "num_blocks": 1310720,
    "uuid": "03af77fe-1af4-4527-b9b7-01fbcbd09fb6",
    "numa_id": -1,
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": true,
    "claim_type": "read_many_write_one",
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
    "driver_specific": {
      "nvme": [
        {
          "pci_address": "0000:00:11.0",
          "trid": { "trtype": "PCIe", "traddr": "0000:00:11.0" },
          "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12341", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12341", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
          "vs": { "nvme_version": "1.4" },
          "ns_data": { "id": 1, "can_share": false }
        }
      ],
      "mp_policy": "active_passive"
    }
  }
]'
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120
17:42:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120
17:42:09 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120
17:42:09 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
17:42:09 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
17:42:09 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
17:42:09 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
17:42:09 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=31f8e687-f521-41e3-b641-bdd1611361cf
17:42:09 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
17:42:09 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31f8e687-f521-41e3-b641-bdd1611361cf
17:42:10 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
17:42:10 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a28a8876-e3e7-4f64-b2ac-bd89e608fb88
17:42:10 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a28a8876-e3e7-4f64-b2ac-bd89e608fb88
17:42:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=06762445-6f10-4e8e-8837-f5d34939099a
17:42:10 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
17:42:10 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 06762445-6f10-4e8e-8837-f5d34939099a
17:42:10 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
17:42:10 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
17:42:10 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=06762445-6f10-4e8e-8837-f5d34939099a
17:42:10 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
17:42:10 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 06762445-6f10-4e8e-8837-f5d34939099a
17:42:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06762445-6f10-4e8e-8837-f5d34939099a
17:42:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
  {
    "name": "06762445-6f10-4e8e-8837-f5d34939099a",
    "aliases": [ "lvs/nvme0n1p0" ],
    "product_name": "Logical Volume",
    "block_size": 4096,
    "num_blocks": 26476544,
    "uuid": "06762445-6f10-4e8e-8837-f5d34939099a",
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": false,
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": false, "reset": true, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": false, "compare_and_write": false, "abort": false, "seek_hole": true, "seek_data": true, "copy": false, "nvme_iov_md": false },
    "driver_specific": {
      "lvol": {
        "lvol_store_uuid": "a28a8876-e3e7-4f64-b2ac-bd89e608fb88",
        "base_bdev": "nvme0n1",
        "thin_provision": true,
        "num_allocated_clusters": 0,
        "snapshot": false,
        "clone": false,
        "esnap_clone": false
      }
    }
  }
]'
17:42:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
17:42:11 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
17:42:11 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
17:42:11 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
17:42:11 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
17:42:11 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
17:42:11 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 06762445-6f10-4e8e-8837-f5d34939099a
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06762445-6f10-4e8e-8837-f5d34939099a
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info: same Logical Volume descriptor for 06762445-6f10-4e8e-8837-f5d34939099a as above
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
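create_nv_cache_bdev, in progress above, attaches the cache controller, sizes the write-buffer cache from the base bdev (a 103424 MiB lvol yields a 5171 MiB cache here), and carves the cache out of the controller's namespace with bdev_split_create. A hedged sketch of that sequence, using only the RPCs visible in this log:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach the PCIe controller that will back the NV cache
  $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  cache_size=5171    # MiB; derived from the base bdev size by ftl/common.sh
  # split one cache_size-MiB partition off nvc0n1; this produces nvc0n1p0
  $rpc_py bdev_split_create nvc0n1 -s "$cache_size" 1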
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
17:42:11 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
17:42:11 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
17:42:11 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
17:42:11 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 06762445-6f10-4e8e-8837-f5d34939099a
17:42:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06762445-6f10-4e8e-8837-f5d34939099a
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info: same Logical Volume descriptor for 06762445-6f10-4e8e-8837-f5d34939099a as above
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
17:42:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
17:42:12 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
17:42:12 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 06762445-6f10-4e8e-8837-f5d34939099a --l2p_dram_limit 10'
17:42:12 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
17:42:12 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
17:42:12 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
17:42:12 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
17:42:12 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 06762445-6f10-4e8e-8837-f5d34939099a --l2p_dram_limit 10 -c nvc0n1p0
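The "[: : integer expression expected" message above is a real shell error at restore.sh line 54: with the corresponding option never set, the test '[' '' -eq 1 ']' compares an empty string as an integer. The test still falls through to the else branch, so the run continues, but giving the variable a numeric default avoids the noise. A minimal sketch of the guard (the variable name is illustrative, matching the getopts sketch earlier, not the actual name in restore.sh):

  # '[' '' -eq 1 ']' fails when the flag was never set; default it to 0 first
  if [ "${fast_shutdown:-0}" -eq 1 ]; then
      echo "fast shutdown path"
  fi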
17:42:12.526763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:35.189 [2024-11-26 17:42:12.526794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.815 ms 00:45:35.189 [2024-11-26 17:42:12.526819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.189 [2024-11-26 17:42:12.526948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.189 [2024-11-26 17:42:12.527000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:35.189 [2024-11-26 17:42:12.527031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:45:35.189 [2024-11-26 17:42:12.527072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.189 [2024-11-26 17:42:12.527168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.189 [2024-11-26 17:42:12.527209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:35.189 [2024-11-26 17:42:12.527244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:35.189 [2024-11-26 17:42:12.527279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.189 [2024-11-26 17:42:12.527330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:35.189 [2024-11-26 17:42:12.534343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.189 [2024-11-26 17:42:12.534415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:35.189 [2024-11-26 17:42:12.534458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.034 ms 00:45:35.189 [2024-11-26 17:42:12.534490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.189 [2024-11-26 17:42:12.534558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.189 [2024-11-26 17:42:12.534591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:35.189 [2024-11-26 17:42:12.534646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:35.189 [2024-11-26 17:42:12.534681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.189 [2024-11-26 17:42:12.534749] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:45:35.189 [2024-11-26 17:42:12.534933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:35.189 [2024-11-26 17:42:12.534991] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:35.189 [2024-11-26 17:42:12.535064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:35.189 [2024-11-26 17:42:12.535110] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:35.190 [2024-11-26 17:42:12.535150] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:35.190 [2024-11-26 17:42:12.535192] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:35.190 [2024-11-26 17:42:12.535226] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:35.190 [2024-11-26 17:42:12.535258] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:35.190 [2024-11-26 17:42:12.535286] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:35.190 [2024-11-26 17:42:12.535320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.190 [2024-11-26 17:42:12.535369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:35.190 [2024-11-26 17:42:12.535401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:45:35.190 [2024-11-26 17:42:12.535429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.190 [2024-11-26 17:42:12.535536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.190 [2024-11-26 17:42:12.535584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:35.190 [2024-11-26 17:42:12.535630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:45:35.190 [2024-11-26 17:42:12.535660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.190 [2024-11-26 17:42:12.535801] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:35.190 [2024-11-26 17:42:12.535844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:35.190 [2024-11-26 17:42:12.535880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:35.190 [2024-11-26 17:42:12.535911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.535945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:35.190 [2024-11-26 17:42:12.535974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:35.190 [2024-11-26 17:42:12.536071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:35.190 [2024-11-26 17:42:12.536132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:35.190 [2024-11-26 17:42:12.536162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:35.190 [2024-11-26 17:42:12.536195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:35.190 [2024-11-26 17:42:12.536228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:35.190 [2024-11-26 17:42:12.536262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:35.190 [2024-11-26 17:42:12.536291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:35.190 [2024-11-26 17:42:12.536354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:35.190 [2024-11-26 17:42:12.536451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:35.190 
[2024-11-26 17:42:12.536542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:35.190 [2024-11-26 17:42:12.536640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:35.190 [2024-11-26 17:42:12.536746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:35.190 [2024-11-26 17:42:12.536825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:35.190 [2024-11-26 17:42:12.536843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:35.190 [2024-11-26 17:42:12.536851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:35.190 [2024-11-26 17:42:12.536861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:35.190 [2024-11-26 17:42:12.536868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:35.190 [2024-11-26 17:42:12.536879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:35.190 [2024-11-26 17:42:12.536887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:35.190 [2024-11-26 17:42:12.536905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:35.190 [2024-11-26 17:42:12.536915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536922] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:35.190 [2024-11-26 17:42:12.536935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:35.190 [2024-11-26 17:42:12.536944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:35.190 [2024-11-26 17:42:12.536957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:35.190 [2024-11-26 17:42:12.536966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:35.190 [2024-11-26 17:42:12.536980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:35.190 [2024-11-26 17:42:12.536987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:35.190 [2024-11-26 17:42:12.536998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:35.190 [2024-11-26 17:42:12.537006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:35.190 [2024-11-26 17:42:12.537016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:35.190 [2024-11-26 17:42:12.537031] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:35.190 [2024-11-26 
17:42:12.537050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:35.190 [2024-11-26 17:42:12.537072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:35.190 [2024-11-26 17:42:12.537081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:35.190 [2024-11-26 17:42:12.537093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:35.190 [2024-11-26 17:42:12.537101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:35.190 [2024-11-26 17:42:12.537112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:35.190 [2024-11-26 17:42:12.537120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:35.190 [2024-11-26 17:42:12.537131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:35.190 [2024-11-26 17:42:12.537139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:35.190 [2024-11-26 17:42:12.537152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:35.190 [2024-11-26 17:42:12.537201] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:35.190 [2024-11-26 17:42:12.537213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:35.190 [2024-11-26 17:42:12.537234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:35.190 [2024-11-26 17:42:12.537243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:35.190 [2024-11-26 17:42:12.537254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:35.190 [2024-11-26 17:42:12.537264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.190 [2024-11-26 17:42:12.537276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:35.190 [2024-11-26 17:42:12.537286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:45:35.190 [2024-11-26 17:42:12.537298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.190 [2024-11-26 17:42:12.537372] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:45:35.190 [2024-11-26 17:42:12.537401] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:45:38.479 [2024-11-26 17:42:15.567680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.567767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:45:38.479 [2024-11-26 17:42:15.567802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3036.149 ms 00:45:38.479 [2024-11-26 17:42:15.567815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.621861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.621931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:38.479 [2024-11-26 17:42:15.621950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.769 ms 00:45:38.479 [2024-11-26 17:42:15.621963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.622175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.622192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:38.479 [2024-11-26 17:42:15.622204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:45:38.479 [2024-11-26 17:42:15.622224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.683469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.683531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:38.479 [2024-11-26 17:42:15.683546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.316 ms 00:45:38.479 [2024-11-26 17:42:15.683559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.683647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.683661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:38.479 [2024-11-26 17:42:15.683672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:38.479 [2024-11-26 17:42:15.683700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.684662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.684695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:38.479 [2024-11-26 17:42:15.684707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:45:38.479 [2024-11-26 17:42:15.684731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 
[2024-11-26 17:42:15.684859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.684876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:38.479 [2024-11-26 17:42:15.684886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:45:38.479 [2024-11-26 17:42:15.684900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.711680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.711802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:38.479 [2024-11-26 17:42:15.711835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.809 ms 00:45:38.479 [2024-11-26 17:42:15.711847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.742906] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:38.479 [2024-11-26 17:42:15.748660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.748794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:38.479 [2024-11-26 17:42:15.748818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.742 ms 00:45:38.479 [2024-11-26 17:42:15.748828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.839835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.840030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:45:38.479 [2024-11-26 17:42:15.840055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.100 ms 00:45:38.479 [2024-11-26 17:42:15.840064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.840303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.840315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:38.479 [2024-11-26 17:42:15.840331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:45:38.479 [2024-11-26 17:42:15.840338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.875875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.875913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:45:38.479 [2024-11-26 17:42:15.875929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.553 ms 00:45:38.479 [2024-11-26 17:42:15.875937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.913518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.913634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:45:38.479 [2024-11-26 17:42:15.913658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.602 ms 00:45:38.479 [2024-11-26 17:42:15.913667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.479 [2024-11-26 17:42:15.914507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.479 [2024-11-26 17:42:15.914538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:38.479 
[2024-11-26 17:42:15.914557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:45:38.479 [2024-11-26 17:42:15.914566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.019772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.739 [2024-11-26 17:42:16.019959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:45:38.739 [2024-11-26 17:42:16.019989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.230 ms 00:45:38.739 [2024-11-26 17:42:16.019998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.062502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.739 [2024-11-26 17:42:16.062594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:45:38.739 [2024-11-26 17:42:16.062636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.482 ms 00:45:38.739 [2024-11-26 17:42:16.062649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.104696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.739 [2024-11-26 17:42:16.104765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:45:38.739 [2024-11-26 17:42:16.104783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.035 ms 00:45:38.739 [2024-11-26 17:42:16.104792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.142070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.739 [2024-11-26 17:42:16.142132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:38.739 [2024-11-26 17:42:16.142152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.290 ms 00:45:38.739 [2024-11-26 17:42:16.142162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.142223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.739 [2024-11-26 17:42:16.142234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:38.739 [2024-11-26 17:42:16.142252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:38.739 [2024-11-26 17:42:16.142261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.142383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.739 [2024-11-26 17:42:16.142399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:38.739 [2024-11-26 17:42:16.142411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:45:38.739 [2024-11-26 17:42:16.142420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.739 [2024-11-26 17:42:16.143953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3643.425 ms, result 0 00:45:38.739 { 00:45:38.739 "name": "ftl0", 00:45:38.739 "uuid": "577621b7-83e5-45f3-93b3-fbcbcb8e7851" 00:45:38.739 } 00:45:38.739 17:42:16 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:45:38.739 17:42:16 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:45:39.050 17:42:16 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:45:39.050 17:42:16 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:45:39.311 [2024-11-26 17:42:16.598060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.598139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:39.311 [2024-11-26 17:42:16.598156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:39.311 [2024-11-26 17:42:16.598167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.598192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:39.311 [2024-11-26 17:42:16.603052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.603087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:39.311 [2024-11-26 17:42:16.603100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:45:39.311 [2024-11-26 17:42:16.603108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.603407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.603418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:39.311 [2024-11-26 17:42:16.603429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:45:39.311 [2024-11-26 17:42:16.603438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.605932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.605954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:39.311 [2024-11-26 17:42:16.605966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:45:39.311 [2024-11-26 17:42:16.605974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.611023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.611078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:39.311 [2024-11-26 17:42:16.611090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:45:39.311 [2024-11-26 17:42:16.611098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.647630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.647672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:39.311 [2024-11-26 17:42:16.647687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.521 ms 00:45:39.311 [2024-11-26 17:42:16.647695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.670617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.670743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:39.311 [2024-11-26 17:42:16.670767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.907 ms 00:45:39.311 [2024-11-26 17:42:16.670777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.670970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.670985] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:39.311 [2024-11-26 17:42:16.670999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:45:39.311 [2024-11-26 17:42:16.671008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.714536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.714593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:39.311 [2024-11-26 17:42:16.714635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.579 ms 00:45:39.311 [2024-11-26 17:42:16.714645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.311 [2024-11-26 17:42:16.752914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.311 [2024-11-26 17:42:16.753028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:39.311 [2024-11-26 17:42:16.753051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.276 ms 00:45:39.311 [2024-11-26 17:42:16.753060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.573 [2024-11-26 17:42:16.793858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.573 [2024-11-26 17:42:16.793938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:39.573 [2024-11-26 17:42:16.793958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.801 ms 00:45:39.573 [2024-11-26 17:42:16.793967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.573 [2024-11-26 17:42:16.834525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.573 [2024-11-26 17:42:16.834585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:39.573 [2024-11-26 17:42:16.834618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.449 ms 00:45:39.573 [2024-11-26 17:42:16.834636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.573 [2024-11-26 17:42:16.834700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:39.573 [2024-11-26 17:42:16.834717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834817] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.834996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 
[2024-11-26 17:42:16.835061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:39.573 [2024-11-26 17:42:16.835266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:45:39.574 [2024-11-26 17:42:16.835334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:39.574 [2024-11-26 17:42:16.835737] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:39.574 [2024-11-26 17:42:16.835747] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 577621b7-83e5-45f3-93b3-fbcbcb8e7851 00:45:39.574 [2024-11-26 17:42:16.835755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:39.574 [2024-11-26 17:42:16.835768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:39.574 [2024-11-26 17:42:16.835779] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:39.574 [2024-11-26 17:42:16.835790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:39.574 [2024-11-26 17:42:16.835798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:39.574 [2024-11-26 17:42:16.835809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:39.574 [2024-11-26 17:42:16.835817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:39.574 [2024-11-26 17:42:16.835826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:39.574 [2024-11-26 17:42:16.835832] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:45:39.574 [2024-11-26 17:42:16.835842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.574 [2024-11-26 17:42:16.835851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:39.574 [2024-11-26 17:42:16.835861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:45:39.574 [2024-11-26 17:42:16.835872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.574 [2024-11-26 17:42:16.857850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.574 [2024-11-26 17:42:16.857905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:39.574 [2024-11-26 17:42:16.857921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.947 ms 00:45:39.574 [2024-11-26 17:42:16.857931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.574 [2024-11-26 17:42:16.858580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:39.574 [2024-11-26 17:42:16.858600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:39.574 [2024-11-26 17:42:16.858632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:45:39.574 [2024-11-26 17:42:16.858640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.574 [2024-11-26 17:42:16.930260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.575 [2024-11-26 17:42:16.930336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:39.575 [2024-11-26 17:42:16.930371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.575 [2024-11-26 17:42:16.930381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.575 [2024-11-26 17:42:16.930491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.575 [2024-11-26 17:42:16.930502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:39.575 [2024-11-26 17:42:16.930520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.575 [2024-11-26 17:42:16.930529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.575 [2024-11-26 17:42:16.930742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.575 [2024-11-26 17:42:16.930757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:39.575 [2024-11-26 17:42:16.930769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.575 [2024-11-26 17:42:16.930777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.575 [2024-11-26 17:42:16.930806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.575 [2024-11-26 17:42:16.930831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:39.575 [2024-11-26 17:42:16.930843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.575 [2024-11-26 17:42:16.930855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.072650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.072758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:39.835 [2024-11-26 17:42:17.072780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:45:39.835 [2024-11-26 17:42:17.072790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.181130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.181211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:39.835 [2024-11-26 17:42:17.181249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.181258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.181420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.181431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:39.835 [2024-11-26 17:42:17.181443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.181451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.181514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.181524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:39.835 [2024-11-26 17:42:17.181536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.181544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.181739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.181756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:39.835 [2024-11-26 17:42:17.181768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.181777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.181837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.181850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:39.835 [2024-11-26 17:42:17.181863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.181871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.181927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.181937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:39.835 [2024-11-26 17:42:17.181951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.181960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.182019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:39.835 [2024-11-26 17:42:17.182029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:39.835 [2024-11-26 17:42:17.182041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:39.835 [2024-11-26 17:42:17.182050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:39.835 [2024-11-26 17:42:17.182218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 585.244 ms, result 0 00:45:39.835 true 00:45:39.835 17:42:17 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79808 
00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79808 ']' 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79808 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79808 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:39.835 killing process with pid 79808 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79808' 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79808 00:45:39.835 17:42:17 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79808 00:45:47.959 17:42:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:45:50.498 262144+0 records in 00:45:50.498 262144+0 records out 00:45:50.498 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.75375 s, 286 MB/s 00:45:50.498 17:42:27 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:45:52.406 17:42:29 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:52.406 [2024-11-26 17:42:29.527489] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:45:52.406 [2024-11-26 17:42:29.527677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80060 ] 00:45:52.406 [2024-11-26 17:42:29.712787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:52.666 [2024-11-26 17:42:29.855790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:52.926 [2024-11-26 17:42:30.287963] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:52.926 [2024-11-26 17:42:30.288051] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:53.186 [2024-11-26 17:42:30.452196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.186 [2024-11-26 17:42:30.452258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:53.186 [2024-11-26 17:42:30.452272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:53.186 [2024-11-26 17:42:30.452280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.186 [2024-11-26 17:42:30.452329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.186 [2024-11-26 17:42:30.452342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:53.186 [2024-11-26 17:42:30.452351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:45:53.186 [2024-11-26 17:42:30.452358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.186 [2024-11-26 17:42:30.452377] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:45:53.186 [2024-11-26 17:42:30.453292] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:53.186 [2024-11-26 17:42:30.453320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.186 [2024-11-26 17:42:30.453328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:53.186 [2024-11-26 17:42:30.453336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:45:53.186 [2024-11-26 17:42:30.453344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.186 [2024-11-26 17:42:30.455867] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:53.186 [2024-11-26 17:42:30.475956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.186 [2024-11-26 17:42:30.475989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:53.186 [2024-11-26 17:42:30.476001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.128 ms 00:45:53.187 [2024-11-26 17:42:30.476010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.476103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.476117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:53.187 [2024-11-26 17:42:30.476126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:45:53.187 [2024-11-26 17:42:30.476134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.488667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.488698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:53.187 [2024-11-26 17:42:30.488709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.487 ms 00:45:53.187 [2024-11-26 17:42:30.488727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.488829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.488842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:53.187 [2024-11-26 17:42:30.488851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:45:53.187 [2024-11-26 17:42:30.488858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.488913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.488922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:53.187 [2024-11-26 17:42:30.488930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:53.187 [2024-11-26 17:42:30.488937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.488971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:53.187 [2024-11-26 17:42:30.494868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.494897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:53.187 [2024-11-26 17:42:30.494913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.918 ms 00:45:53.187 [2024-11-26 17:42:30.494921] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.494951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.494959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:53.187 [2024-11-26 17:42:30.494967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:53.187 [2024-11-26 17:42:30.494975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.495022] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:53.187 [2024-11-26 17:42:30.495054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:53.187 [2024-11-26 17:42:30.495090] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:53.187 [2024-11-26 17:42:30.495113] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:53.187 [2024-11-26 17:42:30.495202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:53.187 [2024-11-26 17:42:30.495213] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:53.187 [2024-11-26 17:42:30.495224] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:53.187 [2024-11-26 17:42:30.495234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495244] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495252] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:53.187 [2024-11-26 17:42:30.495261] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:53.187 [2024-11-26 17:42:30.495274] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:53.187 [2024-11-26 17:42:30.495281] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:53.187 [2024-11-26 17:42:30.495289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.495297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:53.187 [2024-11-26 17:42:30.495305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:45:53.187 [2024-11-26 17:42:30.495313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.495382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.187 [2024-11-26 17:42:30.495390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:53.187 [2024-11-26 17:42:30.495398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:45:53.187 [2024-11-26 17:42:30.495405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.187 [2024-11-26 17:42:30.495506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:53.187 [2024-11-26 17:42:30.495520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:53.187 [2024-11-26 17:42:30.495528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:45:53.187 [2024-11-26 17:42:30.495535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:53.187 [2024-11-26 17:42:30.495550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:53.187 [2024-11-26 17:42:30.495573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:53.187 [2024-11-26 17:42:30.495588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:53.187 [2024-11-26 17:42:30.495596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:53.187 [2024-11-26 17:42:30.495602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:53.187 [2024-11-26 17:42:30.495637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:53.187 [2024-11-26 17:42:30.495644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:53.187 [2024-11-26 17:42:30.495651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:53.187 [2024-11-26 17:42:30.495665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:53.187 [2024-11-26 17:42:30.495686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:53.187 [2024-11-26 17:42:30.495705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:53.187 [2024-11-26 17:42:30.495724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:53.187 [2024-11-26 17:42:30.495743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:53.187 [2024-11-26 17:42:30.495755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:53.187 [2024-11-26 17:42:30.495761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:53.187 [2024-11-26 17:42:30.495767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:53.187 [2024-11-26 17:42:30.495773] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:45:53.187 [2024-11-26 17:42:30.495779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:53.187 [2024-11-26 17:42:30.495785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:53.187 [2024-11-26 17:42:30.495791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:53.188 [2024-11-26 17:42:30.495799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:53.188 [2024-11-26 17:42:30.495805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:53.188 [2024-11-26 17:42:30.495812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:53.188 [2024-11-26 17:42:30.495818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:53.188 [2024-11-26 17:42:30.495825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:53.188 [2024-11-26 17:42:30.495832] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:53.188 [2024-11-26 17:42:30.495839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:53.188 [2024-11-26 17:42:30.495847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:53.188 [2024-11-26 17:42:30.495853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:53.188 [2024-11-26 17:42:30.495861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:53.188 [2024-11-26 17:42:30.495867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:53.188 [2024-11-26 17:42:30.495873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:53.188 [2024-11-26 17:42:30.495880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:53.188 [2024-11-26 17:42:30.495886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:53.188 [2024-11-26 17:42:30.495893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:53.188 [2024-11-26 17:42:30.495901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:53.188 [2024-11-26 17:42:30.495911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.495925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:53.188 [2024-11-26 17:42:30.495932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:53.188 [2024-11-26 17:42:30.495942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:53.188 [2024-11-26 17:42:30.495950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:53.188 [2024-11-26 17:42:30.495960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:53.188 [2024-11-26 17:42:30.495968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:53.188 [2024-11-26 17:42:30.495977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:53.188 [2024-11-26 17:42:30.495985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:53.188 [2024-11-26 17:42:30.495994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:53.188 [2024-11-26 17:42:30.496001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.496009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.496017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.496027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.496034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:53.188 [2024-11-26 17:42:30.496043] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:53.188 [2024-11-26 17:42:30.496052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.496064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:53.188 [2024-11-26 17:42:30.496072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:53.188 [2024-11-26 17:42:30.496082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:53.188 [2024-11-26 17:42:30.496091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:53.188 [2024-11-26 17:42:30.496101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.496109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:53.188 [2024-11-26 17:42:30.496119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:45:53.188 [2024-11-26 17:42:30.496126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.188 [2024-11-26 17:42:30.548246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.548302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:53.188 [2024-11-26 17:42:30.548320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.142 ms 00:45:53.188 [2024-11-26 17:42:30.548335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.188 [2024-11-26 17:42:30.548443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.548453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:53.188 [2024-11-26 17:42:30.548463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:45:53.188 [2024-11-26 17:42:30.548471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.188 [2024-11-26 17:42:30.613408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.613486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:53.188 [2024-11-26 17:42:30.613500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.950 ms 00:45:53.188 [2024-11-26 17:42:30.613511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.188 [2024-11-26 17:42:30.613579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.613596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:53.188 [2024-11-26 17:42:30.613618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:45:53.188 [2024-11-26 17:42:30.613628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.188 [2024-11-26 17:42:30.614477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.614501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:53.188 [2024-11-26 17:42:30.614510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:45:53.188 [2024-11-26 17:42:30.614520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.188 [2024-11-26 17:42:30.614664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.188 [2024-11-26 17:42:30.614678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:53.188 [2024-11-26 17:42:30.614698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:45:53.188 [2024-11-26 17:42:30.614706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.639073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.639125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:53.448 [2024-11-26 17:42:30.639138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.388 ms 00:45:53.448 [2024-11-26 17:42:30.639164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.659955] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:45:53.448 [2024-11-26 17:42:30.660001] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:53.448 [2024-11-26 17:42:30.660016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.660027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:53.448 [2024-11-26 17:42:30.660037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.740 ms 00:45:53.448 [2024-11-26 17:42:30.660046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.689790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.689839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:53.448 [2024-11-26 17:42:30.689851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.753 ms 00:45:53.448 [2024-11-26 17:42:30.689860] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.708779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.708818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:53.448 [2024-11-26 17:42:30.708830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.911 ms 00:45:53.448 [2024-11-26 17:42:30.708838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.728127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.728162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:53.448 [2024-11-26 17:42:30.728174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.287 ms 00:45:53.448 [2024-11-26 17:42:30.728182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.729100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.729131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:53.448 [2024-11-26 17:42:30.729143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:45:53.448 [2024-11-26 17:42:30.729161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.834524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.834625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:53.448 [2024-11-26 17:42:30.834671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.539 ms 00:45:53.448 [2024-11-26 17:42:30.834694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.849669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:53.448 [2024-11-26 17:42:30.855390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.855443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:53.448 [2024-11-26 17:42:30.855460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.622 ms 00:45:53.448 [2024-11-26 17:42:30.855485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.855651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.855665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:53.448 [2024-11-26 17:42:30.855676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:53.448 [2024-11-26 17:42:30.855685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.855778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.855789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:53.448 [2024-11-26 17:42:30.855799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:45:53.448 [2024-11-26 17:42:30.855807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.855831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.855841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:45:53.448 [2024-11-26 17:42:30.855849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:53.448 [2024-11-26 17:42:30.855857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.448 [2024-11-26 17:42:30.855937] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:53.448 [2024-11-26 17:42:30.855962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.448 [2024-11-26 17:42:30.855971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:53.448 [2024-11-26 17:42:30.855980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:45:53.448 [2024-11-26 17:42:30.855988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.708 [2024-11-26 17:42:30.894515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.708 [2024-11-26 17:42:30.894573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:53.708 [2024-11-26 17:42:30.894587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.578 ms 00:45:53.708 [2024-11-26 17:42:30.894605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.708 [2024-11-26 17:42:30.894707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:53.708 [2024-11-26 17:42:30.894719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:53.708 [2024-11-26 17:42:30.894729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:45:53.708 [2024-11-26 17:42:30.894736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:53.708 [2024-11-26 17:42:30.896350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.445 ms, result 0 00:45:54.645  [2024-11-26T17:42:33.030Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-26T17:42:33.966Z] Copying: 60/1024 [MB] (30 MBps) [2024-11-26T17:42:34.914Z] Copying: 91/1024 [MB] (30 MBps) [2024-11-26T17:42:36.293Z] Copying: 121/1024 [MB] (30 MBps) [2024-11-26T17:42:37.231Z] Copying: 152/1024 [MB] (31 MBps) [2024-11-26T17:42:38.168Z] Copying: 183/1024 [MB] (30 MBps) [2024-11-26T17:42:39.106Z] Copying: 213/1024 [MB] (30 MBps) [2024-11-26T17:42:40.052Z] Copying: 245/1024 [MB] (31 MBps) [2024-11-26T17:42:40.993Z] Copying: 274/1024 [MB] (29 MBps) [2024-11-26T17:42:41.933Z] Copying: 303/1024 [MB] (28 MBps) [2024-11-26T17:42:43.315Z] Copying: 332/1024 [MB] (28 MBps) [2024-11-26T17:42:43.887Z] Copying: 361/1024 [MB] (28 MBps) [2024-11-26T17:42:44.905Z] Copying: 390/1024 [MB] (29 MBps) [2024-11-26T17:42:46.288Z] Copying: 419/1024 [MB] (29 MBps) [2024-11-26T17:42:47.229Z] Copying: 448/1024 [MB] (28 MBps) [2024-11-26T17:42:48.169Z] Copying: 477/1024 [MB] (29 MBps) [2024-11-26T17:42:49.108Z] Copying: 506/1024 [MB] (28 MBps) [2024-11-26T17:42:50.048Z] Copying: 535/1024 [MB] (28 MBps) [2024-11-26T17:42:50.988Z] Copying: 564/1024 [MB] (28 MBps) [2024-11-26T17:42:51.928Z] Copying: 593/1024 [MB] (29 MBps) [2024-11-26T17:42:52.868Z] Copying: 622/1024 [MB] (28 MBps) [2024-11-26T17:42:54.258Z] Copying: 650/1024 [MB] (27 MBps) [2024-11-26T17:42:55.210Z] Copying: 678/1024 [MB] (28 MBps) [2024-11-26T17:42:56.148Z] Copying: 706/1024 [MB] (28 MBps) [2024-11-26T17:42:57.087Z] Copying: 735/1024 [MB] (28 MBps) [2024-11-26T17:42:58.025Z] Copying: 764/1024 [MB] (28 MBps) [2024-11-26T17:42:58.964Z] Copying: 792/1024 [MB] (28 
MBps) [2024-11-26T17:42:59.901Z] Copying: 821/1024 [MB] (28 MBps) [2024-11-26T17:43:01.282Z] Copying: 850/1024 [MB] (28 MBps) [2024-11-26T17:43:01.852Z] Copying: 879/1024 [MB] (29 MBps) [2024-11-26T17:43:03.228Z] Copying: 908/1024 [MB] (28 MBps) [2024-11-26T17:43:04.166Z] Copying: 937/1024 [MB] (28 MBps) [2024-11-26T17:43:05.104Z] Copying: 966/1024 [MB] (29 MBps) [2024-11-26T17:43:06.043Z] Copying: 994/1024 [MB] (28 MBps) [2024-11-26T17:43:06.043Z] Copying: 1023/1024 [MB] (28 MBps) [2024-11-26T17:43:06.043Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-26 17:43:05.868092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.868142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:28.597 [2024-11-26 17:43:05.868157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:28.597 [2024-11-26 17:43:05.868165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.868186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:28.597 [2024-11-26 17:43:05.872915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.872948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:28.597 [2024-11-26 17:43:05.872969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:46:28.597 [2024-11-26 17:43:05.872977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.875073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.875114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:28.597 [2024-11-26 17:43:05.875124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:46:28.597 [2024-11-26 17:43:05.875132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.892428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.892463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:28.597 [2024-11-26 17:43:05.892474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:46:28.597 [2024-11-26 17:43:05.892482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.897471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.897502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:28.597 [2024-11-26 17:43:05.897511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:46:28.597 [2024-11-26 17:43:05.897519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.933167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.933200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:28.597 [2024-11-26 17:43:05.933211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.657 ms 00:46:28.597 [2024-11-26 17:43:05.933218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.953097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.953130] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:28.597 [2024-11-26 17:43:05.953141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.885 ms 00:46:28.597 [2024-11-26 17:43:05.953164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.953280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.953300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:28.597 [2024-11-26 17:43:05.953309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:46:28.597 [2024-11-26 17:43:05.953316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:05.987416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:05.987448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:28.597 [2024-11-26 17:43:05.987459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.152 ms 00:46:28.597 [2024-11-26 17:43:05.987466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.597 [2024-11-26 17:43:06.022719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.597 [2024-11-26 17:43:06.022794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:28.597 [2024-11-26 17:43:06.022808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.289 ms 00:46:28.597 [2024-11-26 17:43:06.022815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.858 [2024-11-26 17:43:06.057179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.858 [2024-11-26 17:43:06.057209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:28.858 [2024-11-26 17:43:06.057219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.396 ms 00:46:28.858 [2024-11-26 17:43:06.057226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.858 [2024-11-26 17:43:06.090556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.858 [2024-11-26 17:43:06.090586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:28.858 [2024-11-26 17:43:06.090597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.326 ms 00:46:28.858 [2024-11-26 17:43:06.090604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.858 [2024-11-26 17:43:06.090651] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:28.858 [2024-11-26 17:43:06.090675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 
17:43:06.090732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:46:28.858 [2024-11-26 17:43:06.090953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.090996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:28.858 [2024-11-26 17:43:06.091097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:28.859 [2024-11-26 17:43:06.091473] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:28.859 [2024-11-26 17:43:06.091487] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 577621b7-83e5-45f3-93b3-fbcbcb8e7851 00:46:28.859 [2024-11-26 17:43:06.091495] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:28.859 [2024-11-26 17:43:06.091502] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:28.859 [2024-11-26 17:43:06.091508] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:28.859 [2024-11-26 17:43:06.091516] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:28.859 [2024-11-26 17:43:06.091523] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:28.859 [2024-11-26 17:43:06.091543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:28.859 [2024-11-26 17:43:06.091550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:28.859 [2024-11-26 17:43:06.091556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:28.859 [2024-11-26 17:43:06.091562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:28.859 [2024-11-26 17:43:06.091569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.859 [2024-11-26 17:43:06.091577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:28.859 [2024-11-26 17:43:06.091585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:46:28.859 [2024-11-26 17:43:06.091593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.859 [2024-11-26 17:43:06.112222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.859 [2024-11-26 17:43:06.112251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:28.859 [2024-11-26 17:43:06.112261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.627 ms 00:46:28.859 [2024-11-26 17:43:06.112285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.859 [2024-11-26 17:43:06.112906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:28.859 [2024-11-26 17:43:06.112917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:28.859 [2024-11-26 17:43:06.112925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:46:28.859 [2024-11-26 17:43:06.112941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.859 [2024-11-26 17:43:06.168670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:28.859 [2024-11-26 17:43:06.168727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:28.859 [2024-11-26 17:43:06.168740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:28.859 [2024-11-26 17:43:06.168749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.859 [2024-11-26 17:43:06.168825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:28.859 [2024-11-26 17:43:06.168833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:28.859 [2024-11-26 17:43:06.168841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:28.859 [2024-11-26 17:43:06.168854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.859 [2024-11-26 17:43:06.168938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:28.859 [2024-11-26 17:43:06.168950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:28.859 [2024-11-26 17:43:06.168958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:28.859 [2024-11-26 17:43:06.168965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:28.859 [2024-11-26 17:43:06.168983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:28.859 [2024-11-26 17:43:06.168991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:28.859 [2024-11-26 17:43:06.168998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:46:28.859 [2024-11-26 17:43:06.169005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.119 [2024-11-26 17:43:06.304290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.119 [2024-11-26 17:43:06.304369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:29.119 [2024-11-26 17:43:06.304384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.119 [2024-11-26 17:43:06.304409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.119 [2024-11-26 17:43:06.412508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.119 [2024-11-26 17:43:06.412588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:29.119 [2024-11-26 17:43:06.412602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.119 [2024-11-26 17:43:06.412635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.412775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.120 [2024-11-26 17:43:06.412785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:29.120 [2024-11-26 17:43:06.412794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.120 [2024-11-26 17:43:06.412802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.412852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.120 [2024-11-26 17:43:06.412861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:29.120 [2024-11-26 17:43:06.412869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.120 [2024-11-26 17:43:06.412877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.412993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.120 [2024-11-26 17:43:06.413006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:29.120 [2024-11-26 17:43:06.413013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.120 [2024-11-26 17:43:06.413021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.413058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.120 [2024-11-26 17:43:06.413068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:29.120 [2024-11-26 17:43:06.413077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.120 [2024-11-26 17:43:06.413084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.413128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.120 [2024-11-26 17:43:06.413142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:29.120 [2024-11-26 17:43:06.413150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.120 [2024-11-26 17:43:06.413157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.413214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:29.120 [2024-11-26 17:43:06.413223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:29.120 [2024-11-26 17:43:06.413231] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:29.120 [2024-11-26 17:43:06.413238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.120 [2024-11-26 17:43:06.413374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.294 ms, result 0 00:46:31.026 00:46:31.026 00:46:31.026 17:43:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:46:31.286 [2024-11-26 17:43:08.504667] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:46:31.286 [2024-11-26 17:43:08.504785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80446 ] 00:46:31.286 [2024-11-26 17:43:08.684421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:31.545 [2024-11-26 17:43:08.813826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:31.805 [2024-11-26 17:43:09.217142] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:31.805 [2024-11-26 17:43:09.217216] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:32.065 [2024-11-26 17:43:09.378063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.378128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:32.065 [2024-11-26 17:43:09.378143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:32.065 [2024-11-26 17:43:09.378151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.378197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.378210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:32.065 [2024-11-26 17:43:09.378218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:46:32.065 [2024-11-26 17:43:09.378225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.378243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:32.065 [2024-11-26 17:43:09.379191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:32.065 [2024-11-26 17:43:09.379216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.379224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:32.065 [2024-11-26 17:43:09.379233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:46:32.065 [2024-11-26 17:43:09.379240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.381760] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:32.065 [2024-11-26 17:43:09.401056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.401091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:32.065 [2024-11-26 
17:43:09.401103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.335 ms 00:46:32.065 [2024-11-26 17:43:09.401126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.401189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.401199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:32.065 [2024-11-26 17:43:09.401208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:46:32.065 [2024-11-26 17:43:09.401216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.414034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.414063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:32.065 [2024-11-26 17:43:09.414074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.784 ms 00:46:32.065 [2024-11-26 17:43:09.414085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.414170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.414182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:32.065 [2024-11-26 17:43:09.414192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:46:32.065 [2024-11-26 17:43:09.414199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.414251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.414261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:32.065 [2024-11-26 17:43:09.414269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:46:32.065 [2024-11-26 17:43:09.414277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.414308] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:32.065 [2024-11-26 17:43:09.420098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.420125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:32.065 [2024-11-26 17:43:09.420138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.810 ms 00:46:32.065 [2024-11-26 17:43:09.420161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.420190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.420199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:32.065 [2024-11-26 17:43:09.420207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:46:32.065 [2024-11-26 17:43:09.420214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.420248] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:32.065 [2024-11-26 17:43:09.420271] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:32.065 [2024-11-26 17:43:09.420304] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:32.065 [2024-11-26 17:43:09.420324] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:32.065 [2024-11-26 17:43:09.420412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:32.065 [2024-11-26 17:43:09.420422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:32.065 [2024-11-26 17:43:09.420432] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:32.065 [2024-11-26 17:43:09.420442] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:32.065 [2024-11-26 17:43:09.420452] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:32.065 [2024-11-26 17:43:09.420461] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:32.065 [2024-11-26 17:43:09.420469] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:32.065 [2024-11-26 17:43:09.420479] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:32.065 [2024-11-26 17:43:09.420486] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:32.065 [2024-11-26 17:43:09.420494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.065 [2024-11-26 17:43:09.420501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:32.065 [2024-11-26 17:43:09.420509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:46:32.065 [2024-11-26 17:43:09.420516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.065 [2024-11-26 17:43:09.420582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.066 [2024-11-26 17:43:09.420592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:32.066 [2024-11-26 17:43:09.420599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:46:32.066 [2024-11-26 17:43:09.420607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.066 [2024-11-26 17:43:09.420715] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:32.066 [2024-11-26 17:43:09.420730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:32.066 [2024-11-26 17:43:09.420738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:32.066 [2024-11-26 17:43:09.420763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:32.066 [2024-11-26 17:43:09.420785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:32.066 [2024-11-26 17:43:09.420815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:32.066 [2024-11-26 17:43:09.420822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:46:32.066 [2024-11-26 17:43:09.420828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:32.066 [2024-11-26 17:43:09.420847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:32.066 [2024-11-26 17:43:09.420855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:46:32.066 [2024-11-26 17:43:09.420862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:32.066 [2024-11-26 17:43:09.420875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:32.066 [2024-11-26 17:43:09.420895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:32.066 [2024-11-26 17:43:09.420915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:32.066 [2024-11-26 17:43:09.420942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:32.066 [2024-11-26 17:43:09.420962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:32.066 [2024-11-26 17:43:09.420976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:32.066 [2024-11-26 17:43:09.420982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:46:32.066 [2024-11-26 17:43:09.420989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:32.066 [2024-11-26 17:43:09.420996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:32.066 [2024-11-26 17:43:09.421002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:46:32.066 [2024-11-26 17:43:09.421008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:32.066 [2024-11-26 17:43:09.421015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:32.066 [2024-11-26 17:43:09.421022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:46:32.066 [2024-11-26 17:43:09.421028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.421035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:32.066 [2024-11-26 17:43:09.421041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:46:32.066 [2024-11-26 17:43:09.421048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.421054] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:32.066 [2024-11-26 17:43:09.421062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:32.066 [2024-11-26 17:43:09.421068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:32.066 [2024-11-26 17:43:09.421075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:32.066 [2024-11-26 17:43:09.421082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:32.066 [2024-11-26 17:43:09.421088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:32.066 [2024-11-26 17:43:09.421095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:32.066 [2024-11-26 17:43:09.421102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:32.066 [2024-11-26 17:43:09.421108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:32.066 [2024-11-26 17:43:09.421114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:32.066 [2024-11-26 17:43:09.421123] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:32.066 [2024-11-26 17:43:09.421132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:32.066 [2024-11-26 17:43:09.421152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:46:32.066 [2024-11-26 17:43:09.421159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:46:32.066 [2024-11-26 17:43:09.421166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:46:32.066 [2024-11-26 17:43:09.421174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:46:32.066 [2024-11-26 17:43:09.421180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:46:32.066 [2024-11-26 17:43:09.421187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:46:32.066 [2024-11-26 17:43:09.421195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:46:32.066 [2024-11-26 17:43:09.421202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:46:32.066 [2024-11-26 17:43:09.421210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421231] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:46:32.066 [2024-11-26 17:43:09.421246] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:32.066 [2024-11-26 17:43:09.421254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:32.066 [2024-11-26 17:43:09.421268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:32.066 [2024-11-26 17:43:09.421275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:32.066 [2024-11-26 17:43:09.421282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:32.066 [2024-11-26 17:43:09.421291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.067 [2024-11-26 17:43:09.421299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:32.067 [2024-11-26 17:43:09.421307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:46:32.067 [2024-11-26 17:43:09.421314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.067 [2024-11-26 17:43:09.470687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.067 [2024-11-26 17:43:09.470820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:32.067 [2024-11-26 17:43:09.470837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.416 ms 00:46:32.067 [2024-11-26 17:43:09.470851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.067 [2024-11-26 17:43:09.470946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.067 [2024-11-26 17:43:09.470955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:32.067 [2024-11-26 17:43:09.470964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:46:32.067 [2024-11-26 17:43:09.470971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.533707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.533751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:32.327 [2024-11-26 17:43:09.533764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.762 ms 00:46:32.327 [2024-11-26 17:43:09.533773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.533819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.533833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:32.327 [2024-11-26 17:43:09.533842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:32.327 [2024-11-26 17:43:09.533849] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.534689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.534703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:32.327 [2024-11-26 17:43:09.534711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:46:32.327 [2024-11-26 17:43:09.534719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.534845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.534858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:32.327 [2024-11-26 17:43:09.534873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:46:32.327 [2024-11-26 17:43:09.534881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.557801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.557843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:32.327 [2024-11-26 17:43:09.557855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.942 ms 00:46:32.327 [2024-11-26 17:43:09.557862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.577853] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:32.327 [2024-11-26 17:43:09.577888] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:32.327 [2024-11-26 17:43:09.577901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.577909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:32.327 [2024-11-26 17:43:09.577918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.952 ms 00:46:32.327 [2024-11-26 17:43:09.577926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.606163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.606199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:32.327 [2024-11-26 17:43:09.606211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.252 ms 00:46:32.327 [2024-11-26 17:43:09.606220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.623243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.623275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:32.327 [2024-11-26 17:43:09.623285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.000 ms 00:46:32.327 [2024-11-26 17:43:09.623293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.640448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.640478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:32.327 [2024-11-26 17:43:09.640487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.155 ms 00:46:32.327 [2024-11-26 17:43:09.640494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 
[2024-11-26 17:43:09.641275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.641304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:32.327 [2024-11-26 17:43:09.641317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:46:32.327 [2024-11-26 17:43:09.641324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.738640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.738706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:32.327 [2024-11-26 17:43:09.738729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.480 ms 00:46:32.327 [2024-11-26 17:43:09.738737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.750023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:32.327 [2024-11-26 17:43:09.754980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.755011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:32.327 [2024-11-26 17:43:09.755024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.196 ms 00:46:32.327 [2024-11-26 17:43:09.755048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.327 [2024-11-26 17:43:09.755193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.327 [2024-11-26 17:43:09.755204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:32.328 [2024-11-26 17:43:09.755218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:32.328 [2024-11-26 17:43:09.755226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.328 [2024-11-26 17:43:09.755310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.328 [2024-11-26 17:43:09.755320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:32.328 [2024-11-26 17:43:09.755329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:46:32.328 [2024-11-26 17:43:09.755337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.328 [2024-11-26 17:43:09.755357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.328 [2024-11-26 17:43:09.755366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:32.328 [2024-11-26 17:43:09.755374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:32.328 [2024-11-26 17:43:09.755382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.328 [2024-11-26 17:43:09.755423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:32.328 [2024-11-26 17:43:09.755433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.328 [2024-11-26 17:43:09.755442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:32.328 [2024-11-26 17:43:09.755449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:46:32.328 [2024-11-26 17:43:09.755457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.587 [2024-11-26 17:43:09.793441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.587 [2024-11-26 
17:43:09.793497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:32.587 [2024-11-26 17:43:09.793516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.037 ms 00:46:32.587 [2024-11-26 17:43:09.793525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.587 [2024-11-26 17:43:09.793622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:32.587 [2024-11-26 17:43:09.793633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:32.587 [2024-11-26 17:43:09.793643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:46:32.587 [2024-11-26 17:43:09.793652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:32.587 [2024-11-26 17:43:09.795235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.393 ms, result 0 00:46:33.525  [2024-11-26T17:43:12.350Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-26T17:43:13.289Z] Copying: 61/1024 [MB] (30 MBps) [2024-11-26T17:43:14.227Z] Copying: 90/1024 [MB] (29 MBps) [2024-11-26T17:43:15.165Z] Copying: 122/1024 [MB] (31 MBps) [2024-11-26T17:43:16.111Z] Copying: 152/1024 [MB] (30 MBps) [2024-11-26T17:43:17.048Z] Copying: 183/1024 [MB] (30 MBps) [2024-11-26T17:43:17.985Z] Copying: 213/1024 [MB] (30 MBps) [2024-11-26T17:43:19.363Z] Copying: 244/1024 [MB] (30 MBps) [2024-11-26T17:43:20.298Z] Copying: 274/1024 [MB] (30 MBps) [2024-11-26T17:43:21.236Z] Copying: 303/1024 [MB] (29 MBps) [2024-11-26T17:43:22.178Z] Copying: 334/1024 [MB] (30 MBps) [2024-11-26T17:43:23.117Z] Copying: 365/1024 [MB] (30 MBps) [2024-11-26T17:43:24.052Z] Copying: 396/1024 [MB] (30 MBps) [2024-11-26T17:43:24.991Z] Copying: 427/1024 [MB] (31 MBps) [2024-11-26T17:43:26.371Z] Copying: 457/1024 [MB] (30 MBps) [2024-11-26T17:43:26.940Z] Copying: 488/1024 [MB] (30 MBps) [2024-11-26T17:43:28.314Z] Copying: 519/1024 [MB] (31 MBps) [2024-11-26T17:43:29.251Z] Copying: 550/1024 [MB] (30 MBps) [2024-11-26T17:43:30.187Z] Copying: 580/1024 [MB] (30 MBps) [2024-11-26T17:43:31.123Z] Copying: 610/1024 [MB] (30 MBps) [2024-11-26T17:43:32.062Z] Copying: 640/1024 [MB] (29 MBps) [2024-11-26T17:43:33.001Z] Copying: 670/1024 [MB] (29 MBps) [2024-11-26T17:43:33.939Z] Copying: 701/1024 [MB] (30 MBps) [2024-11-26T17:43:35.319Z] Copying: 732/1024 [MB] (31 MBps) [2024-11-26T17:43:36.258Z] Copying: 763/1024 [MB] (31 MBps) [2024-11-26T17:43:37.197Z] Copying: 794/1024 [MB] (31 MBps) [2024-11-26T17:43:38.135Z] Copying: 825/1024 [MB] (31 MBps) [2024-11-26T17:43:39.072Z] Copying: 856/1024 [MB] (30 MBps) [2024-11-26T17:43:40.009Z] Copying: 886/1024 [MB] (30 MBps) [2024-11-26T17:43:40.949Z] Copying: 916/1024 [MB] (30 MBps) [2024-11-26T17:43:42.328Z] Copying: 947/1024 [MB] (30 MBps) [2024-11-26T17:43:43.266Z] Copying: 978/1024 [MB] (31 MBps) [2024-11-26T17:43:43.526Z] Copying: 1008/1024 [MB] (30 MBps) [2024-11-26T17:43:44.095Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-26 17:43:43.786414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.786643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:06.649 [2024-11-26 17:43:43.787066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:06.649 [2024-11-26 17:43:43.787104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.787176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:47:06.649 [2024-11-26 17:43:43.793423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.793526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:06.649 [2024-11-26 17:43:43.793580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.206 ms 00:47:06.649 [2024-11-26 17:43:43.793625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.793920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.793969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:06.649 [2024-11-26 17:43:43.794005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:47:06.649 [2024-11-26 17:43:43.794037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.797402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.797463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:06.649 [2024-11-26 17:43:43.797490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:47:06.649 [2024-11-26 17:43:43.797537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.803716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.803793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:06.649 [2024-11-26 17:43:43.803820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:47:06.649 [2024-11-26 17:43:43.803840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.839488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.839562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:06.649 [2024-11-26 17:43:43.839575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.567 ms 00:47:06.649 [2024-11-26 17:43:43.839583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.858460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.858493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:06.649 [2024-11-26 17:43:43.858504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.874 ms 00:47:06.649 [2024-11-26 17:43:43.858511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.858691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.858707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:06.649 [2024-11-26 17:43:43.858716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:47:06.649 [2024-11-26 17:43:43.858724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.649 [2024-11-26 17:43:43.893162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.649 [2024-11-26 17:43:43.893232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:06.650 [2024-11-26 17:43:43.893245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.489 ms 
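Within each trace_step group above, the 427/428 records carry the step name and the 430/431 records its duration and status. A quick way to pull the slowest steps out of a saved copy of this log (the file name is hypothetical, and this assumes one record per line, as the console originally printed them):

awk -F 'name: ' '/428:trace_step/ { name = $2 }
                 /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                    print $0, name }' ftl_restore.log \
    | sort -rn | head

Note that these per-step durations do not add up to the 522.705 ms 'FTL shutdown' total reported further down; wall-clock gaps between steps (for instance around the zero-duration Rollback entries) appear to account for the difference. The same filtering idea works on the band dump that follows: grep 'Band' ftl_restore.log | grep -v 'state: free' would surface any band left in a non-free state (here all 100 report free).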
00:47:06.650 [2024-11-26 17:43:43.893253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.650 [2024-11-26 17:43:43.928804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.650 [2024-11-26 17:43:43.928835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:06.650 [2024-11-26 17:43:43.928845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.597 ms 00:47:06.650 [2024-11-26 17:43:43.928852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.650 [2024-11-26 17:43:43.961786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.650 [2024-11-26 17:43:43.961816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:06.650 [2024-11-26 17:43:43.961826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.962 ms 00:47:06.650 [2024-11-26 17:43:43.961832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.650 [2024-11-26 17:43:43.994575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.650 [2024-11-26 17:43:43.994605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:06.650 [2024-11-26 17:43:43.994624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.732 ms 00:47:06.650 [2024-11-26 17:43:43.994630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.650 [2024-11-26 17:43:43.994665] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:06.650 [2024-11-26 17:43:43.994685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.994995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995183] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:06.650 [2024-11-26 17:43:43.995278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995364] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:06.651 [2024-11-26 17:43:43.995458] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:06.651 [2024-11-26 17:43:43.995466] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 577621b7-83e5-45f3-93b3-fbcbcb8e7851 00:47:06.651 [2024-11-26 17:43:43.995473] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:47:06.651 [2024-11-26 17:43:43.995480] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:47:06.651 [2024-11-26 17:43:43.995487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:47:06.651 [2024-11-26 17:43:43.995495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:47:06.651 [2024-11-26 17:43:43.995514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:06.651 [2024-11-26 17:43:43.995522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:06.651 [2024-11-26 17:43:43.995530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:06.651 [2024-11-26 17:43:43.995536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:06.651 [2024-11-26 17:43:43.995542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:06.651 [2024-11-26 17:43:43.995549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.651 [2024-11-26 17:43:43.995556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:06.651 [2024-11-26 17:43:43.995565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:47:06.651 [2024-11-26 17:43:43.995576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.651 [2024-11-26 17:43:44.015856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.651 [2024-11-26 17:43:44.015884] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:06.651 [2024-11-26 17:43:44.015894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.265 ms 00:47:06.651 [2024-11-26 17:43:44.015901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.651 [2024-11-26 17:43:44.016494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:06.651 [2024-11-26 17:43:44.016502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:06.651 [2024-11-26 17:43:44.016515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:47:06.651 [2024-11-26 17:43:44.016522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.651 [2024-11-26 17:43:44.070848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.651 [2024-11-26 17:43:44.070883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:06.651 [2024-11-26 17:43:44.070894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.651 [2024-11-26 17:43:44.070902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.651 [2024-11-26 17:43:44.070970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.651 [2024-11-26 17:43:44.070979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:06.651 [2024-11-26 17:43:44.070991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.651 [2024-11-26 17:43:44.070998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.651 [2024-11-26 17:43:44.071066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.651 [2024-11-26 17:43:44.071077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:06.651 [2024-11-26 17:43:44.071086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.651 [2024-11-26 17:43:44.071094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.651 [2024-11-26 17:43:44.071111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.651 [2024-11-26 17:43:44.071119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:06.651 [2024-11-26 17:43:44.071126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.651 [2024-11-26 17:43:44.071139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.203555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.203700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:06.911 [2024-11-26 17:43:44.203717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.203742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.307377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:06.911 [2024-11-26 17:43:44.307396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:47:06.911 [2024-11-26 17:43:44.307517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:06.911 [2024-11-26 17:43:44.307525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.307581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:06.911 [2024-11-26 17:43:44.307589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.307770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:06.911 [2024-11-26 17:43:44.307778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.307837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:06.911 [2024-11-26 17:43:44.307844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.307911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:06.911 [2024-11-26 17:43:44.307918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.307971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:06.911 [2024-11-26 17:43:44.307981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:06.911 [2024-11-26 17:43:44.307989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:06.911 [2024-11-26 17:43:44.307997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:06.911 [2024-11-26 17:43:44.308131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.705 ms, result 0 00:47:08.292 00:47:08.292 00:47:08.292 17:43:45 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:47:09.673 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:47:09.673 17:43:47 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:47:09.937 [2024-11-26 17:43:47.132093] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
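For reference, the three shell steps driving this restore phase, reconstructed from the restore.sh commands echoed above (paths, counts and offsets are verbatim from the log; the SPDK variable is shorthand introduced here, and restore.sh itself may wrap these calls differently). Each spdk_dd invocation brings the FTL bdev up from ftl.json, which is why every copy is bracketed by an 'FTL startup' and an 'FTL shutdown' management sequence:

SPDK=/home/vagrant/spdk_repo/spdk
# Step 1 (restore.sh@74): read 262144 blocks out of the ftl0 bdev into a
# plain file; the progress meter above shows this comes to 1024 MB.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
    --json="$SPDK/test/ftl/config/ftl.json" --count=262144
# Step 2 (restore.sh@76): check the data read back against the recorded
# checksum ("testfile: OK" above).
md5sum -c "$SPDK/test/ftl/testfile.md5"
# Step 3 (restore.sh@79): write the file back into ftl0 at block offset
# 131072, the run whose startup trace begins below.
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" --seek=131072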
00:47:09.937 [2024-11-26 17:43:47.132214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80838 ]
00:47:09.937 [2024-11-26 17:43:47.307964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:47:10.225 [2024-11-26 17:43:47.438721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:47:10.511 [2024-11-26 17:43:47.845515] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:47:10.511 [2024-11-26 17:43:47.845681] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:47:10.803 [2024-11-26 17:43:48.004938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.803 [2024-11-26 17:43:48.005101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:47:10.803 [2024-11-26 17:43:48.005119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:47:10.803 [2024-11-26 17:43:48.005127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.803 [2024-11-26 17:43:48.005180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.803 [2024-11-26 17:43:48.005194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:47:10.803 [2024-11-26 17:43:48.005202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:47:10.803 [2024-11-26 17:43:48.005209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.803 [2024-11-26 17:43:48.005228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:47:10.803 [2024-11-26 17:43:48.006250] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:47:10.803 [2024-11-26 17:43:48.006270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.803 [2024-11-26 17:43:48.006278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:47:10.803 [2024-11-26 17:43:48.006287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms
00:47:10.803 [2024-11-26 17:43:48.006296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.803 [2024-11-26 17:43:48.008759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:47:10.803 [2024-11-26 17:43:48.030000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.803 [2024-11-26 17:43:48.030038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:47:10.804 [2024-11-26 17:43:48.030052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.282 ms
00:47:10.804 [2024-11-26 17:43:48.030060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.030138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.030149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:47:10.804 [2024-11-26 17:43:48.030158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:47:10.804 [2024-11-26 17:43:48.030166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.042702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.042759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:47:10.804 [2024-11-26 17:43:48.042772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.494 ms
00:47:10.804 [2024-11-26 17:43:48.042785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.042868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.042879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:47:10.804 [2024-11-26 17:43:48.042888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:47:10.804 [2024-11-26 17:43:48.042895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.042950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.042960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:47:10.804 [2024-11-26 17:43:48.042968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:47:10.804 [2024-11-26 17:43:48.042976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.043007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:47:10.804 [2024-11-26 17:43:48.048605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.048716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:47:10.804 [2024-11-26 17:43:48.048734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.618 ms
00:47:10.804 [2024-11-26 17:43:48.048742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.048774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.048783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:47:10.804 [2024-11-26 17:43:48.048791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:47:10.804 [2024-11-26 17:43:48.048798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.048834] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:47:10.804 [2024-11-26 17:43:48.048859] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:47:10.804 [2024-11-26 17:43:48.048896] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:47:10.804 [2024-11-26 17:43:48.048915] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:47:10.804 [2024-11-26 17:43:48.049008] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:47:10.804 [2024-11-26 17:43:48.049018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:47:10.804 [2024-11-26 17:43:48.049028] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:47:10.804 [2024-11-26 17:43:48.049038] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049047] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049056] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:47:10.804 [2024-11-26 17:43:48.049064] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:47:10.804 [2024-11-26 17:43:48.049075] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:47:10.804 [2024-11-26 17:43:48.049083] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:47:10.804 [2024-11-26 17:43:48.049091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.049100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:47:10.804 [2024-11-26 17:43:48.049108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms
00:47:10.804 [2024-11-26 17:43:48.049115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.049184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.804 [2024-11-26 17:43:48.049192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:47:10.804 [2024-11-26 17:43:48.049200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:47:10.804 [2024-11-26 17:43:48.049208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.804 [2024-11-26 17:43:48.049306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:47:10.804 [2024-11-26 17:43:48.049320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:47:10.804 [2024-11-26 17:43:48.049328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:47:10.804 [2024-11-26 17:43:48.049352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:47:10.804 [2024-11-26 17:43:48.049376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:47:10.804 [2024-11-26 17:43:48.049390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:47:10.804 [2024-11-26 17:43:48.049397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:47:10.804 [2024-11-26 17:43:48.049404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:47:10.804 [2024-11-26 17:43:48.049422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:47:10.804 [2024-11-26 17:43:48.049429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:47:10.804 [2024-11-26 17:43:48.049445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:47:10.804 [2024-11-26 17:43:48.049459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:47:10.804 [2024-11-26 17:43:48.049480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:47:10.804 [2024-11-26 17:43:48.049500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:47:10.804 [2024-11-26 17:43:48.049520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:47:10.804 [2024-11-26 17:43:48.049539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:47:10.804 [2024-11-26 17:43:48.049559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:47:10.804 [2024-11-26 17:43:48.049572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:47:10.804 [2024-11-26 17:43:48.049578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:47:10.804 [2024-11-26 17:43:48.049584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:47:10.804 [2024-11-26 17:43:48.049591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:47:10.804 [2024-11-26 17:43:48.049598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:47:10.804 [2024-11-26 17:43:48.049604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:47:10.804 [2024-11-26 17:43:48.049635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:47:10.804 [2024-11-26 17:43:48.049642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049650] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:47:10.804 [2024-11-26 17:43:48.049659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:47:10.804 [2024-11-26 17:43:48.049667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:47:10.804 [2024-11-26 17:43:48.049674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:10.804 [2024-11-26 17:43:48.049682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:47:10.804 [2024-11-26 17:43:48.049689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:47:10.804 [2024-11-26 17:43:48.049712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:47:10.804 [2024-11-26 17:43:48.049719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:47:10.804 [2024-11-26 17:43:48.049726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:47:10.804 [2024-11-26 17:43:48.049733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:47:10.805 [2024-11-26 17:43:48.049741] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:47:10.805 [2024-11-26 17:43:48.049752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:47:10.805 [2024-11-26 17:43:48.049773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:47:10.805 [2024-11-26 17:43:48.049781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:47:10.805 [2024-11-26 17:43:48.049789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:47:10.805 [2024-11-26 17:43:48.049797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:47:10.805 [2024-11-26 17:43:48.049804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:47:10.805 [2024-11-26 17:43:48.049812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:47:10.805 [2024-11-26 17:43:48.049827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:47:10.805 [2024-11-26 17:43:48.049834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:47:10.805 [2024-11-26 17:43:48.049841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:47:10.805 [2024-11-26 17:43:48.049876] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:47:10.805 [2024-11-26 17:43:48.049885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:47:10.805 [2024-11-26 17:43:48.049899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:47:10.805 [2024-11-26 17:43:48.049907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:47:10.805 [2024-11-26 17:43:48.049914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:47:10.805 [2024-11-26 17:43:48.049923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.049932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:47:10.805 [2024-11-26 17:43:48.049939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms
00:47:10.805 [2024-11-26 17:43:48.049949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.097272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.097318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:47:10.805 [2024-11-26 17:43:48.097330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.358 ms
00:47:10.805 [2024-11-26 17:43:48.097344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.097429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.097444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:47:10.805 [2024-11-26 17:43:48.097452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:47:10.805 [2024-11-26 17:43:48.097477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.174231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.174279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:47:10.805 [2024-11-26 17:43:48.174292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.813 ms
00:47:10.805 [2024-11-26 17:43:48.174301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.174353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.174367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:47:10.805 [2024-11-26 17:43:48.174375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:47:10.805 [2024-11-26 17:43:48.174383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.175223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.175243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:47:10.805 [2024-11-26 17:43:48.175253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms
00:47:10.805 [2024-11-26 17:43:48.175261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.175391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.175403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:47:10.805 [2024-11-26 17:43:48.175417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms
00:47:10.805 [2024-11-26 17:43:48.175425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.197803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.197843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:47:10.805 [2024-11-26 17:43:48.197854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.400 ms
00:47:10.805 [2024-11-26 17:43:48.197862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.805 [2024-11-26 17:43:48.217470] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:47:10.805 [2024-11-26 17:43:48.217519] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:47:10.805 [2024-11-26 17:43:48.217533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.805 [2024-11-26 17:43:48.217542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:47:10.805 [2024-11-26 17:43:48.217550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.581 ms
00:47:10.805 [2024-11-26 17:43:48.217558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.247002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.247039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:47:11.066 [2024-11-26 17:43:48.247051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.457 ms
00:47:11.066 [2024-11-26 17:43:48.247059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.264695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.264729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:47:11.066 [2024-11-26 17:43:48.264739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.600 ms
00:47:11.066 [2024-11-26 17:43:48.264746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.281695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.281724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:47:11.066 [2024-11-26 17:43:48.281734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.949 ms
00:47:11.066 [2024-11-26 17:43:48.281757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.282467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.282488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:47:11.066 [2024-11-26 17:43:48.282501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms
00:47:11.066 [2024-11-26 17:43:48.282509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.373986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.374065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:47:11.066 [2024-11-26 17:43:48.374101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.630 ms
00:47:11.066 [2024-11-26 17:43:48.374110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.384557] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:47:11.066 [2024-11-26 17:43:48.388535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.388618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:47:11.066 [2024-11-26 17:43:48.388633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.398 ms
00:47:11.066 [2024-11-26 17:43:48.388642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.388769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.388780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:47:11.066 [2024-11-26 17:43:48.388794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:47:11.066 [2024-11-26 17:43:48.388802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.388883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.388893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:47:11.066 [2024-11-26 17:43:48.388902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:47:11.066 [2024-11-26 17:43:48.388910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.388930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.388939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:47:11.066 [2024-11-26 17:43:48.388947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:47:11.066 [2024-11-26 17:43:48.388954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.388996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:47:11.066 [2024-11-26 17:43:48.389006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.389014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:47:11.066 [2024-11-26 17:43:48.389023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:47:11.066 [2024-11-26 17:43:48.389031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.424406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.424442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:47:11.066 [2024-11-26 17:43:48.424458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.424 ms
00:47:11.066 [2024-11-26 17:43:48.424482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.424560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.066 [2024-11-26 17:43:48.424570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:47:11.066 [2024-11-26 17:43:48.424579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:47:11.066 [2024-11-26 17:43:48.424587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.066 [2024-11-26 17:43:48.426184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.465 ms, result 0
00:47:12.003  [2024-11-26T17:43:50.827Z] Copying: 27/1024 [MB] (27 MBps)
[2024-11-26T17:43:51.766Z] Copying: 55/1024 [MB] (27 MBps)
[2024-11-26T17:43:52.703Z] Copying: 83/1024 [MB] (28 MBps)
[2024-11-26T17:43:53.645Z] Copying: 112/1024 [MB] (28 MBps)
[2024-11-26T17:43:54.582Z] Copying: 140/1024 [MB] (28 MBps)
[2024-11-26T17:43:55.520Z] Copying: 169/1024 [MB] (28 MBps)
[2024-11-26T17:43:56.458Z] Copying: 197/1024 [MB] (28 MBps)
[2024-11-26T17:43:57.839Z] Copying: 225/1024 [MB] (28 MBps)
[2024-11-26T17:43:58.776Z] Copying: 254/1024 [MB] (28 MBps)
[2024-11-26T17:43:59.714Z] Copying: 283/1024 [MB] (28 MBps)
[2024-11-26T17:44:00.649Z] Copying: 311/1024 [MB] (28 MBps)
[2024-11-26T17:44:01.585Z] Copying: 340/1024 [MB] (28 MBps)
[2024-11-26T17:44:02.523Z] Copying: 368/1024 [MB] (28 MBps)
[2024-11-26T17:44:03.460Z] Copying: 397/1024 [MB] (28 MBps)
[2024-11-26T17:44:04.841Z] Copying: 425/1024 [MB] (28 MBps)
[2024-11-26T17:44:05.410Z] Copying: 453/1024 [MB] (28 MBps)
[2024-11-26T17:44:06.794Z] Copying: 482/1024 [MB] (28 MBps)
[2024-11-26T17:44:07.741Z] Copying: 511/1024 [MB] (28 MBps)
[2024-11-26T17:44:08.679Z] Copying: 539/1024 [MB] (28 MBps)
[2024-11-26T17:44:09.617Z] Copying: 568/1024 [MB] (28 MBps)
[2024-11-26T17:44:10.554Z] Copying: 596/1024 [MB] (28 MBps)
[2024-11-26T17:44:11.492Z] Copying: 624/1024 [MB] (28 MBps)
[2024-11-26T17:44:12.430Z] Copying: 653/1024 [MB] (28 MBps)
[2024-11-26T17:44:13.810Z] Copying: 682/1024 [MB] (28 MBps)
[2024-11-26T17:44:14.749Z] Copying: 710/1024 [MB] (28 MBps)
[2024-11-26T17:44:15.686Z] Copying: 739/1024 [MB] (28 MBps)
[2024-11-26T17:44:16.625Z] Copying: 767/1024 [MB] (28 MBps)
[2024-11-26T17:44:17.562Z] Copying: 796/1024 [MB] (28 MBps)
[2024-11-26T17:44:18.501Z] Copying: 824/1024 [MB] (28 MBps)
[2024-11-26T17:44:19.438Z] Copying: 853/1024 [MB] (28 MBps)
[2024-11-26T17:44:20.816Z] Copying: 881/1024 [MB] (28 MBps)
[2024-11-26T17:44:21.385Z] Copying: 910/1024 [MB] (28 MBps)
[2024-11-26T17:44:22.779Z] Copying: 938/1024 [MB] (28 MBps)
[2024-11-26T17:44:23.716Z] Copying: 967/1024 [MB] (28 MBps)
[2024-11-26T17:44:24.655Z] Copying: 995/1024 [MB] (28 MBps)
[2024-11-26T17:44:25.225Z] Copying: 1023/1024 [MB] (27 MBps)
[2024-11-26T17:44:25.225Z] Copying: 1024/1024 [MB] (average 27 MBps)
[2024-11-26 17:44:25.140278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:47.779 [2024-11-26 17:44:25.140352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:47:47.779 [2024-11-26 17:44:25.140376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:47:47.779 [2024-11-26 17:44:25.140386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:47.779 [2024-11-26 17:44:25.142601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:47:47.779 [2024-11-26 17:44:25.149843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:47.779 [2024-11-26 17:44:25.149879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:47:47.779 [2024-11-26 17:44:25.149891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.209 ms
00:47:47.779 [2024-11-26 17:44:25.149900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:47.779 [2024-11-26 17:44:25.160246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:47.779 [2024-11-26 17:44:25.160284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:47:47.779 [2024-11-26 17:44:25.160297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.491 ms
00:47:47.779 [2024-11-26 17:44:25.160328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:47.779 [2024-11-26 17:44:25.183169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:47.779 [2024-11-26 17:44:25.183210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:47:47.779 [2024-11-26 17:44:25.183224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.868 ms
00:47:47.779 [2024-11-26 17:44:25.183235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:47.779 [2024-11-26 17:44:25.188269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:47.779 [2024-11-26 17:44:25.188296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:47:47.779 [2024-11-26 17:44:25.188305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms
00:47:47.779 [2024-11-26 17:44:25.188335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:47.779 [2024-11-26 17:44:25.223983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:47.779 [2024-11-26 17:44:25.224027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:47:47.779 [2024-11-26 17:44:25.224053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.662 ms
00:47:47.779 [2024-11-26 17:44:25.224061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.039 [2024-11-26 17:44:25.244327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.039 [2024-11-26 17:44:25.244358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:47:48.039 [2024-11-26 17:44:25.244369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.271 ms
00:47:48.039 [2024-11-26 17:44:25.244377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.039 [2024-11-26 17:44:25.348145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.039 [2024-11-26 17:44:25.348208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:47:48.039 [2024-11-26 17:44:25.348222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.933 ms
00:47:48.039 [2024-11-26 17:44:25.348231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.039 [2024-11-26 17:44:25.383404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.039 [2024-11-26 17:44:25.383494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:47:48.039 [2024-11-26 17:44:25.383507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.223 ms
00:47:48.039 [2024-11-26 17:44:25.383531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.039 [2024-11-26 17:44:25.417265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.039 [2024-11-26 17:44:25.417331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:47:48.039 [2024-11-26 17:44:25.417361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.765 ms
00:47:48.039 [2024-11-26 17:44:25.417369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.039 [2024-11-26 17:44:25.450539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.039 [2024-11-26 17:44:25.450571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:47:48.039 [2024-11-26 17:44:25.450581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.201 ms
00:47:48.039 [2024-11-26 17:44:25.450589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.300 [2024-11-26 17:44:25.486082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.300 [2024-11-26 17:44:25.486162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:47:48.300 [2024-11-26 17:44:25.486177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.422 ms
00:47:48.300 [2024-11-26 17:44:25.486185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.300 [2024-11-26 17:44:25.486217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:47:48.300 [2024-11-26 17:44:25.486231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108032 / 261120 wr_cnt: 1 state: open
00:47:48.300 [2024-11-26 17:44:25.486241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:47:48.300 [2024-11-26 17:44:25.486816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.486994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:47:48.301 [2024-11-26 17:44:25.487105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:47:48.301 [2024-11-26 17:44:25.487113] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 577621b7-83e5-45f3-93b3-fbcbcb8e7851
00:47:48.301 [2024-11-26 17:44:25.487122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108032
00:47:48.301 [2024-11-26 17:44:25.487130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108992
00:47:48.301 [2024-11-26 17:44:25.487138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108032
00:47:48.301 [2024-11-26 17:44:25.487145] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089
00:47:48.301 [2024-11-26 17:44:25.487170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:47:48.301 [2024-11-26 17:44:25.487179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:47:48.301 [2024-11-26 17:44:25.487187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:47:48.301 [2024-11-26 17:44:25.487194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:47:48.301 [2024-11-26 17:44:25.487200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:47:48.301 [2024-11-26 17:44:25.487208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.301 [2024-11-26 17:44:25.487217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:47:48.301 [2024-11-26 17:44:25.487225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms
00:47:48.301 [2024-11-26 17:44:25.487232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.507389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.301 [2024-11-26 17:44:25.507417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:47:48.301 [2024-11-26 17:44:25.507432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.166 ms
00:47:48.301 [2024-11-26 17:44:25.507440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.508050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:48.301 [2024-11-26 17:44:25.508059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:47:48.301 [2024-11-26 17:44:25.508067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms
00:47:48.301 [2024-11-26 17:44:25.508074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.560373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.301 [2024-11-26 17:44:25.560405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:47:48.301 [2024-11-26 17:44:25.560416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.301 [2024-11-26 17:44:25.560441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.560501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.301 [2024-11-26 17:44:25.560510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:47:48.301 [2024-11-26 17:44:25.560518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.301 [2024-11-26 17:44:25.560524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.560621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.301 [2024-11-26 17:44:25.560639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:47:48.301 [2024-11-26 17:44:25.560647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.301 [2024-11-26 17:44:25.560654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.560671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.301 [2024-11-26 17:44:25.560679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:47:48.301 [2024-11-26 17:44:25.560687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.301 [2024-11-26 17:44:25.560695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.301 [2024-11-26 17:44:25.688900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.301 [2024-11-26 17:44:25.688971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:47:48.301 [2024-11-26 17:44:25.688984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.301 [2024-11-26 17:44:25.688992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.561 [2024-11-26 17:44:25.790250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.561 [2024-11-26 17:44:25.790413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:47:48.561 [2024-11-26 17:44:25.790459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.561 [2024-11-26 17:44:25.790480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.561 [2024-11-26 17:44:25.790593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.561 [2024-11-26 17:44:25.790617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:47:48.561 [2024-11-26 17:44:25.790678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.561 [2024-11-26 17:44:25.790716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.561 [2024-11-26 17:44:25.790797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.561 [2024-11-26 17:44:25.790837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:47:48.561 [2024-11-26 17:44:25.790865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.561 [2024-11-26 17:44:25.790891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.561 [2024-11-26 17:44:25.791038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.561 [2024-11-26 17:44:25.791076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:47:48.561 [2024-11-26 17:44:25.791104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.561 [2024-11-26 17:44:25.791136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.561 [2024-11-26 17:44:25.791208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.561 [2024-11-26 17:44:25.791244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:47:48.561 [2024-11-26 17:44:25.791271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.561 [2024-11-26 17:44:25.791298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.561 [2024-11-26 17:44:25.791380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.561 [2024-11-26 17:44:25.791412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:47:48.561 [2024-11-26 17:44:25.791439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.561 [2024-11-26 17:44:25.791466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.562 [2024-11-26 17:44:25.791544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:47:48.562 [2024-11-26 17:44:25.791576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:47:48.562 [2024-11-26 17:44:25.791604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:47:48.562 [2024-11-26 17:44:25.791639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:48.562 [2024-11-26 17:44:25.791804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 655.125 ms, result 0
00:47:50.469
00:47:50.469
00:47:50.469 17:44:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:47:50.469 [2024-11-26 17:44:27.872577] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:47:50.469 [2024-11-26 17:44:27.872726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81247 ]
00:47:50.729 [2024-11-26 17:44:28.047595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:47:50.729 [2024-11-26 17:44:28.173257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:47:51.298 [2024-11-26 17:44:28.581155] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:47:51.298 [2024-11-26 17:44:28.581223] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:47:51.298 [2024-11-26 17:44:28.739975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.298 [2024-11-26 17:44:28.740030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:47:51.298 [2024-11-26 17:44:28.740044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:47:51.298 [2024-11-26 17:44:28.740053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.298 [2024-11-26 17:44:28.740099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.298 [2024-11-26 17:44:28.740112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:47:51.298 [2024-11-26 17:44:28.740120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:47:51.298 [2024-11-26 17:44:28.740127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.298 [2024-11-26 17:44:28.740146] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:47:51.298 [2024-11-26 17:44:28.741054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:47:51.298 [2024-11-26 17:44:28.741079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.298 [2024-11-26 17:44:28.741088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:47:51.298 [2024-11-26 17:44:28.741097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms
00:47:51.298 [2024-11-26 17:44:28.741104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.558 [2024-11-26 17:44:28.743577] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:47:51.559 [2024-11-26 17:44:28.762911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.762944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:47:51.559 [2024-11-26 17:44:28.762957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.373 ms
00:47:51.559 [2024-11-26 17:44:28.762980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.763043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.763053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:47:51.559 [2024-11-26 17:44:28.763062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:47:51.559 [2024-11-26 17:44:28.763070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.775298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.775326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:47:51.559 [2024-11-26 17:44:28.775337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms
00:47:51.559 [2024-11-26 17:44:28.775348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.775433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.775445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:47:51.559 [2024-11-26 17:44:28.775455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:47:51.559 [2024-11-26 17:44:28.775462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.775509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.775518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:47:51.559 [2024-11-26 17:44:28.775525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:47:51.559 [2024-11-26 17:44:28.775532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.775560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:47:51.559 [2024-11-26 17:44:28.780977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.781004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:47:51.559 [2024-11-26 17:44:28.781017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.436 ms
00:47:51.559 [2024-11-26 17:44:28.781025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.781053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.781061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:47:51.559 [2024-11-26 17:44:28.781069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:47:51.559 [2024-11-26 17:44:28.781076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.781109] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:47:51.559 [2024-11-26 17:44:28.781131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:47:51.559 [2024-11-26 17:44:28.781164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:47:51.559 [2024-11-26 17:44:28.781182] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:47:51.559 [2024-11-26 17:44:28.781267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:47:51.559 [2024-11-26 17:44:28.781276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:47:51.559 [2024-11-26 17:44:28.781286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:47:51.559 [2024-11-26 17:44:28.781295] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781303] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781311] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:47:51.559 [2024-11-26 17:44:28.781320] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:47:51.559 [2024-11-26 17:44:28.781330] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:47:51.559 [2024-11-26 17:44:28.781337] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:47:51.559 [2024-11-26 17:44:28.781345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.781352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:47:51.559 [2024-11-26 17:44:28.781360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms
00:47:51.559 [2024-11-26 17:44:28.781367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.781431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:51.559 [2024-11-26 17:44:28.781439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:47:51.559 [2024-11-26 17:44:28.781447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:47:51.559 [2024-11-26 17:44:28.781461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:51.559 [2024-11-26 17:44:28.781575] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:47:51.559 [2024-11-26 17:44:28.781589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:47:51.559 [2024-11-26 17:44:28.781597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:47:51.559 [2024-11-26 17:44:28.781635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:47:51.559 [2024-11-26 17:44:28.781676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:47:51.559 [2024-11-26 17:44:28.781690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:47:51.559 [2024-11-26 17:44:28.781697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:47:51.559 [2024-11-26 17:44:28.781703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:47:51.559 [2024-11-26 17:44:28.781720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:47:51.559 [2024-11-26 17:44:28.781728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:47:51.559 [2024-11-26 17:44:28.781734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:47:51.559 [2024-11-26 17:44:28.781769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:47:51.559 [2024-11-26 17:44:28.781790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:47:51.559 [2024-11-26 17:44:28.781810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:47:51.559 [2024-11-26 17:44:28.781829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:47:51.559 [2024-11-26 17:44:28.781848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:47:51.559 [2024-11-26 17:44:28.781868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:47:51.559 [2024-11-26 17:44:28.781881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:47:51.559 [2024-11-26 17:44:28.781893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:47:51.559 [2024-11-26 17:44:28.781900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:47:51.559 [2024-11-26 17:44:28.781907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:47:51.559 [2024-11-26 17:44:28.781913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:47:51.559 [2024-11-26 17:44:28.781920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:47:51.559 [2024-11-26 17:44:28.781932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:47:51.559 [2024-11-26 17:44:28.781938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781944] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:47:51.559 [2024-11-26 17:44:28.781952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:47:51.559 [2024-11-26 17:44:28.781958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:47:51.559 [2024-11-26 17:44:28.781965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:47:51.559 [2024-11-26 17:44:28.781973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:47:51.559 [2024-11-26 17:44:28.781981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:47:51.559 [2024-11-26 17:44:28.781987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:47:51.559
[2024-11-26 17:44:28.781994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:51.559 [2024-11-26 17:44:28.782000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:51.559 [2024-11-26 17:44:28.782006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:51.559 [2024-11-26 17:44:28.782014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:51.560 [2024-11-26 17:44:28.782023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:51.560 [2024-11-26 17:44:28.782042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:51.560 [2024-11-26 17:44:28.782049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:51.560 [2024-11-26 17:44:28.782056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:51.560 [2024-11-26 17:44:28.782064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:51.560 [2024-11-26 17:44:28.782071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:51.560 [2024-11-26 17:44:28.782078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:51.560 [2024-11-26 17:44:28.782086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:51.560 [2024-11-26 17:44:28.782093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:51.560 [2024-11-26 17:44:28.782100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:51.560 [2024-11-26 17:44:28.782139] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:51.560 [2024-11-26 17:44:28.782147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:47:51.560 [2024-11-26 17:44:28.782163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:51.560 [2024-11-26 17:44:28.782170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:51.560 [2024-11-26 17:44:28.782178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:51.560 [2024-11-26 17:44:28.782186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.782194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:51.560 [2024-11-26 17:44:28.782202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:47:51.560 [2024-11-26 17:44:28.782210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.829313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.829358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:51.560 [2024-11-26 17:44:28.829371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.144 ms 00:47:51.560 [2024-11-26 17:44:28.829383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.829470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.829495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:51.560 [2024-11-26 17:44:28.829504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:47:51.560 [2024-11-26 17:44:28.829512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.908951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.909086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:51.560 [2024-11-26 17:44:28.909103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.521 ms 00:47:51.560 [2024-11-26 17:44:28.909112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.909165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.909176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:51.560 [2024-11-26 17:44:28.909205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:51.560 [2024-11-26 17:44:28.909212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.910066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.910085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:51.560 [2024-11-26 17:44:28.910094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:47:51.560 [2024-11-26 17:44:28.910103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.910225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.910238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:51.560 [2024-11-26 17:44:28.910253] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:47:51.560 [2024-11-26 17:44:28.910260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.932130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.932203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:51.560 [2024-11-26 17:44:28.932218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.891 ms 00:47:51.560 [2024-11-26 17:44:28.932242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.951607] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:47:51.560 [2024-11-26 17:44:28.951643] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:51.560 [2024-11-26 17:44:28.951656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.951664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:51.560 [2024-11-26 17:44:28.951674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:47:51.560 [2024-11-26 17:44:28.951681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.979434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.979506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:51.560 [2024-11-26 17:44:28.979521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.764 ms 00:47:51.560 [2024-11-26 17:44:28.979530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.560 [2024-11-26 17:44:28.996635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.560 [2024-11-26 17:44:28.996701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:51.560 [2024-11-26 17:44:28.996714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.078 ms 00:47:51.560 [2024-11-26 17:44:28.996721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.013599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.013639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:51.820 [2024-11-26 17:44:29.013649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.859 ms 00:47:51.820 [2024-11-26 17:44:29.013657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.014442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.014471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:51.820 [2024-11-26 17:44:29.014486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:47:51.820 [2024-11-26 17:44:29.014493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.106103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.106182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:51.820 [2024-11-26 17:44:29.106220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.764 ms 00:47:51.820 [2024-11-26 17:44:29.106228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.116943] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:51.820 [2024-11-26 17:44:29.121246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.121274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:51.820 [2024-11-26 17:44:29.121287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.987 ms 00:47:51.820 [2024-11-26 17:44:29.121295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.121397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.121409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:51.820 [2024-11-26 17:44:29.121422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:47:51.820 [2024-11-26 17:44:29.121430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.123606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.123647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:51.820 [2024-11-26 17:44:29.123656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:47:51.820 [2024-11-26 17:44:29.123664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.123729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.123739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:51.820 [2024-11-26 17:44:29.123747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:51.820 [2024-11-26 17:44:29.123755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.123799] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:51.820 [2024-11-26 17:44:29.123810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.123829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:51.820 [2024-11-26 17:44:29.123836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:47:51.820 [2024-11-26 17:44:29.123844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.159036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.159104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:51.820 [2024-11-26 17:44:29.159157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.243 ms 00:47:51.820 [2024-11-26 17:44:29.159177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.820 [2024-11-26 17:44:29.159284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.820 [2024-11-26 17:44:29.159311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:51.820 [2024-11-26 17:44:29.159346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:47:51.820 [2024-11-26 17:44:29.159383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
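
Each FTL management step in the startup trace above is logged by trace_step as a four-record group: Action, name, duration, and status. Assuming the one-record-per-line layout the console normally emits, a small awk helper (hypothetical, not part of the test suite; the log file name is a placeholder) can tabulate step names against their durations:

    awk '
      /428:trace_step/ { sub(/.*name: /, "");     step = $0 }
      /430:trace_step/ { sub(/.*duration: /, ""); printf "%-32s %s\n", step, $0 }
    ' ftl_startup.log

Summing the printed durations should land close to the overall figure reported in the 'FTL startup' finish message just below.
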
00:47:51.820 [2024-11-26 17:44:29.160909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.169 ms, result 0
00:47:53.207 [2024-11-26T17:45:02.894Z] Copying: 1024/1024 [MB] (average 31 MBps) (per-second progress updates from 26/1024 MB onward collapsed)
00:48:25.448 [2024-11-26 17:45:02.785292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.448 [2024-11-26 17:45:02.785493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:48:25.448 [2024-11-26 17:45:02.785568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:48:25.448 [2024-11-26 17:45:02.785597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.448 [2024-11-26 17:45:02.785713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:48:25.448 [2024-11-26 17:45:02.792173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.448 [2024-11-26 17:45:02.792267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:48:25.448 [2024-11-26 17:45:02.792286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.401 ms
00:48:25.449 [2024-11-26 17:45:02.792295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.449 [2024-11-26 17:45:02.792591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.449 [2024-11-26 17:45:02.792632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:48:25.449 [2024-11-26 17:45:02.793071]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:48:25.449 [2024-11-26 17:45:02.793088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.449 [2024-11-26 17:45:02.798080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.449 [2024-11-26 17:45:02.798170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:25.449 [2024-11-26 17:45:02.798188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:48:25.449 [2024-11-26 17:45:02.798198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.449 [2024-11-26 17:45:02.805014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.449 [2024-11-26 17:45:02.805051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:25.449 [2024-11-26 17:45:02.805061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.778 ms 00:48:25.449 [2024-11-26 17:45:02.805076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.449 [2024-11-26 17:45:02.843181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.449 [2024-11-26 17:45:02.843223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:25.449 [2024-11-26 17:45:02.843236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.113 ms 00:48:25.449 [2024-11-26 17:45:02.843244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.449 [2024-11-26 17:45:02.863664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.449 [2024-11-26 17:45:02.863702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:25.449 [2024-11-26 17:45:02.863714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.420 ms 00:48:25.449 [2024-11-26 17:45:02.863739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.709 [2024-11-26 17:45:02.989196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.709 [2024-11-26 17:45:02.989244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:25.709 [2024-11-26 17:45:02.989260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.655 ms 00:48:25.709 [2024-11-26 17:45:02.989270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.709 [2024-11-26 17:45:03.026637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.709 [2024-11-26 17:45:03.026726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:25.709 [2024-11-26 17:45:03.026741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.420 ms 00:48:25.709 [2024-11-26 17:45:03.026749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.709 [2024-11-26 17:45:03.062349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.709 [2024-11-26 17:45:03.062386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:48:25.709 [2024-11-26 17:45:03.062399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.632 ms 00:48:25.709 [2024-11-26 17:45:03.062407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.709 [2024-11-26 17:45:03.098061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:25.709 [2024-11-26 17:45:03.098095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist superblock
00:48:25.709 [2024-11-26 17:45:03.098106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.687 ms
00:48:25.709 [2024-11-26 17:45:03.098113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.709 [2024-11-26 17:45:03.132461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.709 [2024-11-26 17:45:03.132494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:48:25.709 [2024-11-26 17:45:03.132505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.341 ms
00:48:25.709 [2024-11-26 17:45:03.132513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.709 [2024-11-26 17:45:03.132546] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:48:25.709 [2024-11-26 17:45:03.132562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:48:25.710 [2024-11-26 17:45:03.132573 .. 17:45:03.133397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries collapsed)
00:48:25.710 [2024-11-26 17:45:03.133412] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:48:25.711 [2024-11-26 17:45:03.133420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 577621b7-83e5-45f3-93b3-fbcbcb8e7851
00:48:25.711 [2024-11-26 17:45:03.133429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:48:25.711 [2024-11-26 17:45:03.133436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24000
00:48:25.711 [2024-11-26 17:45:03.133445] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23040
00:48:25.711 [2024-11-26 17:45:03.133454] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0417
00:48:25.711 [2024-11-26 17:45:03.133475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:48:25.711 [2024-11-26 17:45:03.133497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:48:25.711 [2024-11-26 17:45:03.133506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:48:25.711 [2024-11-26 17:45:03.133512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:48:25.711 [2024-11-26 17:45:03.133519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:48:25.711 [2024-11-26 17:45:03.133527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.711 [2024-11-26 17:45:03.133536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:48:25.711 [2024-11-26 17:45:03.133545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms
00:48:25.711 [2024-11-26 17:45:03.133552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.971 [2024-11-26 17:45:03.154525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.972 [2024-11-26 17:45:03.154560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:48:25.972 [2024-11-26 17:45:03.154578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.982 ms
00:48:25.972 [2024-11-26 17:45:03.154587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.972 [2024-11-26 17:45:03.155258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:25.972 [2024-11-26 17:45:03.155280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:48:25.972 [2024-11-26 17:45:03.155289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms
00:48:25.972 [2024-11-26 17:45:03.155297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:25.972
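
The WAF figure in the statistics dump above is total media writes divided by user-issued writes, and the two counters printed with it reproduce the reported value:

    awk 'BEGIN { printf "WAF: %.4f\n", 24000 / 23040 }'   # counters from the dump above
    # prints: WAF: 1.0417

Roughly 4% of the media traffic in this run was FTL housekeeping (metadata and relocation) on top of the user data.
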
[2024-11-26 17:45:03.210194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:25.972 [2024-11-26 17:45:03.210240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:25.972 [2024-11-26 17:45:03.210251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:25.972 [2024-11-26 17:45:03.210260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.972 [2024-11-26 17:45:03.210324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:25.972 [2024-11-26 17:45:03.210333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:25.972 [2024-11-26 17:45:03.210342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:25.972 [2024-11-26 17:45:03.210350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.972 [2024-11-26 17:45:03.210444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:25.972 [2024-11-26 17:45:03.210457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:25.972 [2024-11-26 17:45:03.210472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:25.972 [2024-11-26 17:45:03.210479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.972 [2024-11-26 17:45:03.210497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:25.972 [2024-11-26 17:45:03.210506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:25.972 [2024-11-26 17:45:03.210515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:25.972 [2024-11-26 17:45:03.210523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:25.972 [2024-11-26 17:45:03.345615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:25.972 [2024-11-26 17:45:03.345741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:25.972 [2024-11-26 17:45:03.345780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:25.972 [2024-11-26 17:45:03.345802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.458885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:26.244 [2024-11-26 17:45:03.459065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.459224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:26.244 [2024-11-26 17:45:03.459290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.459402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:26.244 [2024-11-26 17:45:03.459462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459473] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.459595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:26.244 [2024-11-26 17:45:03.459641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.459700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:26.244 [2024-11-26 17:45:03.459720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.459772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:26.244 [2024-11-26 17:45:03.459790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.244 [2024-11-26 17:45:03.459849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:26.244 [2024-11-26 17:45:03.459859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:26.244 [2024-11-26 17:45:03.459867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:26.244 [2024-11-26 17:45:03.459874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:26.245 [2024-11-26 17:45:03.460012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 675.987 ms, result 0 00:48:27.181 00:48:27.181 00:48:27.181 17:45:04 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:29.153 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79808 00:48:29.153 17:45:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79808 ']' 00:48:29.153 17:45:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79808 00:48:29.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79808) - No such process 00:48:29.153 Process with pid 79808 is not found 00:48:29.153 Remove shared memory files 00:48:29.153 17:45:06 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79808 is not found' 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 
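
The 'testfile: OK' verdict above is the point of the restore test: a checksum taken while the data was first written must still verify after the FTL device has been torn down and brought back up. A minimal sketch of that pattern, with hypothetical paths (the real flow lives in test/ftl/restore.sh):

    testfile=/path/to/testfile              # hypothetical path
    md5sum "$testfile" > "$testfile.md5"    # checksum recorded while the device is live
    # ... shut down and restore the FTL bdev ...
    md5sum -c "$testfile.md5"               # prints "<testfile>: OK" when the data survived
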
00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:29.153 17:45:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:48:29.153 ************************************ 00:48:29.153 END TEST ftl_restore 00:48:29.153 ************************************ 00:48:29.153 00:48:29.153 real 2m59.169s 00:48:29.153 user 2m46.463s 00:48:29.153 sys 0m14.760s 00:48:29.153 17:45:06 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:29.153 17:45:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:48:29.153 17:45:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:48:29.153 17:45:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:48:29.153 17:45:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:29.153 17:45:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:48:29.153 ************************************ 00:48:29.153 START TEST ftl_dirty_shutdown 00:48:29.153 ************************************ 00:48:29.153 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:48:29.414 * Looking for test storage... 00:48:29.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.414 --rc genhtml_branch_coverage=1 00:48:29.414 --rc genhtml_function_coverage=1 00:48:29.414 --rc genhtml_legend=1 00:48:29.414 --rc geninfo_all_blocks=1 00:48:29.414 --rc geninfo_unexecuted_blocks=1 00:48:29.414 00:48:29.414 ' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.414 --rc genhtml_branch_coverage=1 00:48:29.414 --rc genhtml_function_coverage=1 00:48:29.414 --rc genhtml_legend=1 00:48:29.414 --rc geninfo_all_blocks=1 00:48:29.414 --rc geninfo_unexecuted_blocks=1 00:48:29.414 00:48:29.414 ' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.414 --rc genhtml_branch_coverage=1 00:48:29.414 --rc genhtml_function_coverage=1 00:48:29.414 --rc genhtml_legend=1 00:48:29.414 --rc geninfo_all_blocks=1 00:48:29.414 --rc geninfo_unexecuted_blocks=1 00:48:29.414 00:48:29.414 ' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.414 --rc genhtml_branch_coverage=1 00:48:29.414 --rc genhtml_function_coverage=1 00:48:29.414 --rc genhtml_legend=1 00:48:29.414 --rc geninfo_all_blocks=1 00:48:29.414 --rc geninfo_unexecuted_blocks=1 00:48:29.414 00:48:29.414 ' 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:29.414 --rc genhtml_branch_coverage=1
00:48:29.414 --rc genhtml_function_coverage=1
00:48:29.414 --rc genhtml_legend=1
00:48:29.414 --rc geninfo_all_blocks=1
00:48:29.414 --rc geninfo_unexecuted_blocks=1
00:48:29.414
00:48:29.414 '
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:29.414 --rc genhtml_branch_coverage=1
00:48:29.414 --rc genhtml_function_coverage=1
00:48:29.414 --rc genhtml_legend=1
00:48:29.414 --rc geninfo_all_blocks=1
00:48:29.414 --rc geninfo_unexecuted_blocks=1
00:48:29.414
00:48:29.414 '
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:29.414 --rc genhtml_branch_coverage=1
00:48:29.414 --rc genhtml_function_coverage=1
00:48:29.414 --rc genhtml_legend=1
00:48:29.414 --rc geninfo_all_blocks=1
00:48:29.414 --rc geninfo_unexecuted_blocks=1
00:48:29.414
00:48:29.414 '
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:48:29.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:29.414 --rc genhtml_branch_coverage=1
00:48:29.414 --rc genhtml_function_coverage=1
00:48:29.414 --rc genhtml_legend=1
00:48:29.414 --rc geninfo_all_blocks=1
00:48:29.414 --rc geninfo_unexecuted_blocks=1
00:48:29.414
00:48:29.414 '
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81697
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81697
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81697 ']'
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:48:29.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:48:29.414 17:45:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:48:29.675 [2024-11-26 17:45:06.964393] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
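waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, simply polls the target's RPC socket until it answers. A minimal equivalent of this bring-up pattern (rpc_get_methods is a standard SPDK RPC; the sleep interval here is illustrative):

  # Start the SPDK target pinned to core 0, then wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done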
00:48:29.675 [2024-11-26 17:45:06.964664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81697 ] 00:48:29.934 [2024-11-26 17:45:07.147537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:29.934 [2024-11-26 17:45:07.292894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:31.312 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:31.571 { 00:48:31.571 "name": "nvme0n1", 00:48:31.571 "aliases": [ 00:48:31.571 "8bbaf543-1a50-40a7-95e0-7d42e145e4a3" 00:48:31.571 ], 00:48:31.571 "product_name": "NVMe disk", 00:48:31.571 "block_size": 4096, 00:48:31.571 "num_blocks": 1310720, 00:48:31.571 "uuid": "8bbaf543-1a50-40a7-95e0-7d42e145e4a3", 00:48:31.571 "numa_id": -1, 00:48:31.571 "assigned_rate_limits": { 00:48:31.571 "rw_ios_per_sec": 0, 00:48:31.571 "rw_mbytes_per_sec": 0, 00:48:31.571 "r_mbytes_per_sec": 0, 00:48:31.571 "w_mbytes_per_sec": 0 00:48:31.571 }, 00:48:31.571 "claimed": true, 00:48:31.571 "claim_type": "read_many_write_one", 00:48:31.571 "zoned": false, 00:48:31.571 "supported_io_types": { 00:48:31.571 "read": true, 00:48:31.571 "write": true, 00:48:31.571 "unmap": true, 00:48:31.571 "flush": true, 00:48:31.571 "reset": true, 00:48:31.571 "nvme_admin": true, 00:48:31.571 "nvme_io": true, 00:48:31.571 "nvme_io_md": false, 00:48:31.571 "write_zeroes": true, 00:48:31.571 "zcopy": false, 00:48:31.571 "get_zone_info": false, 00:48:31.571 "zone_management": false, 00:48:31.571 "zone_append": false, 00:48:31.571 "compare": true, 00:48:31.571 "compare_and_write": false, 00:48:31.571 "abort": true, 00:48:31.571 "seek_hole": false, 00:48:31.571 "seek_data": false, 00:48:31.571 
"copy": true, 00:48:31.571 "nvme_iov_md": false 00:48:31.571 }, 00:48:31.571 "driver_specific": { 00:48:31.571 "nvme": [ 00:48:31.571 { 00:48:31.571 "pci_address": "0000:00:11.0", 00:48:31.571 "trid": { 00:48:31.571 "trtype": "PCIe", 00:48:31.571 "traddr": "0000:00:11.0" 00:48:31.571 }, 00:48:31.571 "ctrlr_data": { 00:48:31.571 "cntlid": 0, 00:48:31.571 "vendor_id": "0x1b36", 00:48:31.571 "model_number": "QEMU NVMe Ctrl", 00:48:31.571 "serial_number": "12341", 00:48:31.571 "firmware_revision": "8.0.0", 00:48:31.571 "subnqn": "nqn.2019-08.org.qemu:12341", 00:48:31.571 "oacs": { 00:48:31.571 "security": 0, 00:48:31.571 "format": 1, 00:48:31.571 "firmware": 0, 00:48:31.571 "ns_manage": 1 00:48:31.571 }, 00:48:31.571 "multi_ctrlr": false, 00:48:31.571 "ana_reporting": false 00:48:31.571 }, 00:48:31.571 "vs": { 00:48:31.571 "nvme_version": "1.4" 00:48:31.571 }, 00:48:31.571 "ns_data": { 00:48:31.571 "id": 1, 00:48:31.571 "can_share": false 00:48:31.571 } 00:48:31.571 } 00:48:31.571 ], 00:48:31.571 "mp_policy": "active_passive" 00:48:31.571 } 00:48:31.571 } 00:48:31.571 ]' 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:31.571 17:45:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:48:31.830 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a28a8876-e3e7-4f64-b2ac-bd89e608fb88 00:48:31.830 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:48:31.830 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a28a8876-e3e7-4f64-b2ac-bd89e608fb88 00:48:32.089 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:48:32.348 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=52e0b836-1818-4bd9-a822-90cf12fc6b0d 00:48:32.348 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 52e0b836-1818-4bd9-a822-90cf12fc6b0d 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:32.606 17:45:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:32.606 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:32.606 { 00:48:32.606 "name": "fbbecf69-1c16-43f0-be7c-753fea877a60", 00:48:32.606 "aliases": [ 00:48:32.606 "lvs/nvme0n1p0" 00:48:32.606 ], 00:48:32.606 "product_name": "Logical Volume", 00:48:32.606 "block_size": 4096, 00:48:32.606 "num_blocks": 26476544, 00:48:32.606 "uuid": "fbbecf69-1c16-43f0-be7c-753fea877a60", 00:48:32.606 "assigned_rate_limits": { 00:48:32.607 "rw_ios_per_sec": 0, 00:48:32.607 "rw_mbytes_per_sec": 0, 00:48:32.607 "r_mbytes_per_sec": 0, 00:48:32.607 "w_mbytes_per_sec": 0 00:48:32.607 }, 00:48:32.607 "claimed": false, 00:48:32.607 "zoned": false, 00:48:32.607 "supported_io_types": { 00:48:32.607 "read": true, 00:48:32.607 "write": true, 00:48:32.607 "unmap": true, 00:48:32.607 "flush": false, 00:48:32.607 "reset": true, 00:48:32.607 "nvme_admin": false, 00:48:32.607 "nvme_io": false, 00:48:32.607 "nvme_io_md": false, 00:48:32.607 "write_zeroes": true, 00:48:32.607 "zcopy": false, 00:48:32.607 "get_zone_info": false, 00:48:32.607 "zone_management": false, 00:48:32.607 "zone_append": false, 00:48:32.607 "compare": false, 00:48:32.607 "compare_and_write": false, 00:48:32.607 "abort": false, 00:48:32.607 "seek_hole": true, 00:48:32.607 "seek_data": true, 00:48:32.607 "copy": false, 00:48:32.607 "nvme_iov_md": false 00:48:32.607 }, 00:48:32.607 "driver_specific": { 00:48:32.607 "lvol": { 00:48:32.607 "lvol_store_uuid": "52e0b836-1818-4bd9-a822-90cf12fc6b0d", 00:48:32.607 "base_bdev": "nvme0n1", 00:48:32.607 "thin_provision": true, 00:48:32.607 "num_allocated_clusters": 0, 00:48:32.607 "snapshot": false, 00:48:32.607 "clone": false, 00:48:32.607 "esnap_clone": false 00:48:32.607 } 00:48:32.607 } 00:48:32.607 } 00:48:32.607 ]' 00:48:32.607 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:48:32.865 17:45:10 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:33.124 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:33.382 { 00:48:33.382 "name": "fbbecf69-1c16-43f0-be7c-753fea877a60", 00:48:33.382 "aliases": [ 00:48:33.382 "lvs/nvme0n1p0" 00:48:33.382 ], 00:48:33.382 "product_name": "Logical Volume", 00:48:33.382 "block_size": 4096, 00:48:33.382 "num_blocks": 26476544, 00:48:33.382 "uuid": "fbbecf69-1c16-43f0-be7c-753fea877a60", 00:48:33.382 "assigned_rate_limits": { 00:48:33.382 "rw_ios_per_sec": 0, 00:48:33.382 "rw_mbytes_per_sec": 0, 00:48:33.382 "r_mbytes_per_sec": 0, 00:48:33.382 "w_mbytes_per_sec": 0 00:48:33.382 }, 00:48:33.382 "claimed": false, 00:48:33.382 "zoned": false, 00:48:33.382 "supported_io_types": { 00:48:33.382 "read": true, 00:48:33.382 "write": true, 00:48:33.382 "unmap": true, 00:48:33.382 "flush": false, 00:48:33.382 "reset": true, 00:48:33.382 "nvme_admin": false, 00:48:33.382 "nvme_io": false, 00:48:33.382 "nvme_io_md": false, 00:48:33.382 "write_zeroes": true, 00:48:33.382 "zcopy": false, 00:48:33.382 "get_zone_info": false, 00:48:33.382 "zone_management": false, 00:48:33.382 "zone_append": false, 00:48:33.382 "compare": false, 00:48:33.382 "compare_and_write": false, 00:48:33.382 "abort": false, 00:48:33.382 "seek_hole": true, 00:48:33.382 "seek_data": true, 00:48:33.382 "copy": false, 00:48:33.382 "nvme_iov_md": false 00:48:33.382 }, 00:48:33.382 "driver_specific": { 00:48:33.382 "lvol": { 00:48:33.382 "lvol_store_uuid": "52e0b836-1818-4bd9-a822-90cf12fc6b0d", 00:48:33.382 "base_bdev": "nvme0n1", 00:48:33.382 "thin_provision": true, 00:48:33.382 "num_allocated_clusters": 0, 00:48:33.382 "snapshot": false, 00:48:33.382 "clone": false, 00:48:33.382 "esnap_clone": false 00:48:33.382 } 00:48:33.382 } 00:48:33.382 } 00:48:33.382 ]' 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:48:33.382 17:45:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:33.641 17:45:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbbecf69-1c16-43f0-be7c-753fea877a60 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:33.912 { 00:48:33.912 "name": "fbbecf69-1c16-43f0-be7c-753fea877a60", 00:48:33.912 "aliases": [ 00:48:33.912 "lvs/nvme0n1p0" 00:48:33.912 ], 00:48:33.912 "product_name": "Logical Volume", 00:48:33.912 "block_size": 4096, 00:48:33.912 "num_blocks": 26476544, 00:48:33.912 "uuid": "fbbecf69-1c16-43f0-be7c-753fea877a60", 00:48:33.912 "assigned_rate_limits": { 00:48:33.912 "rw_ios_per_sec": 0, 00:48:33.912 "rw_mbytes_per_sec": 0, 00:48:33.912 "r_mbytes_per_sec": 0, 00:48:33.912 "w_mbytes_per_sec": 0 00:48:33.912 }, 00:48:33.912 "claimed": false, 00:48:33.912 "zoned": false, 00:48:33.912 "supported_io_types": { 00:48:33.912 "read": true, 00:48:33.912 "write": true, 00:48:33.912 "unmap": true, 00:48:33.912 "flush": false, 00:48:33.912 "reset": true, 00:48:33.912 "nvme_admin": false, 00:48:33.912 "nvme_io": false, 00:48:33.912 "nvme_io_md": false, 00:48:33.912 "write_zeroes": true, 00:48:33.912 "zcopy": false, 00:48:33.912 "get_zone_info": false, 00:48:33.912 "zone_management": false, 00:48:33.912 "zone_append": false, 00:48:33.912 "compare": false, 00:48:33.912 "compare_and_write": false, 00:48:33.912 "abort": false, 00:48:33.912 "seek_hole": true, 00:48:33.912 "seek_data": true, 00:48:33.912 "copy": false, 00:48:33.912 "nvme_iov_md": false 00:48:33.912 }, 00:48:33.912 "driver_specific": { 00:48:33.912 "lvol": { 00:48:33.912 "lvol_store_uuid": "52e0b836-1818-4bd9-a822-90cf12fc6b0d", 00:48:33.912 "base_bdev": "nvme0n1", 00:48:33.912 "thin_provision": true, 00:48:33.912 "num_allocated_clusters": 0, 00:48:33.912 "snapshot": false, 00:48:33.912 "clone": false, 00:48:33.912 "esnap_clone": false 00:48:33.912 } 00:48:33.912 } 00:48:33.912 } 00:48:33.912 ]' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fbbecf69-1c16-43f0-be7c-753fea877a60 
--l2p_dram_limit 10' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:48:33.912 17:45:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fbbecf69-1c16-43f0-be7c-753fea877a60 --l2p_dram_limit 10 -c nvc0n1p0 00:48:34.190 [2024-11-26 17:45:11.415078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.190 [2024-11-26 17:45:11.415269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:34.190 [2024-11-26 17:45:11.415296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:48:34.190 [2024-11-26 17:45:11.415305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.190 [2024-11-26 17:45:11.415400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.190 [2024-11-26 17:45:11.415413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:34.190 [2024-11-26 17:45:11.415425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:48:34.190 [2024-11-26 17:45:11.415434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.190 [2024-11-26 17:45:11.415459] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:34.190 [2024-11-26 17:45:11.416615] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:34.191 [2024-11-26 17:45:11.416648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.416657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:34.191 [2024-11-26 17:45:11.416668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:48:34.191 [2024-11-26 17:45:11.416676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.416755] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0a4ea280-838a-45d3-81ea-2a0df37d5369 00:48:34.191 [2024-11-26 17:45:11.419290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.419327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:48:34.191 [2024-11-26 17:45:11.419339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:48:34.191 [2024-11-26 17:45:11.419359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.433554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.433603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:34.191 [2024-11-26 17:45:11.433625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.136 ms 00:48:34.191 [2024-11-26 17:45:11.433637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.433776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.433797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:34.191 [2024-11-26 17:45:11.433807] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:48:34.191 [2024-11-26 17:45:11.433823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.433890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.433903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:34.191 [2024-11-26 17:45:11.433914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:48:34.191 [2024-11-26 17:45:11.433925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.433953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:34.191 [2024-11-26 17:45:11.440053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.440081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:34.191 [2024-11-26 17:45:11.440096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:48:34.191 [2024-11-26 17:45:11.440104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.440142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.440150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:34.191 [2024-11-26 17:45:11.440161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:48:34.191 [2024-11-26 17:45:11.440168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.440202] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:48:34.191 [2024-11-26 17:45:11.440340] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:48:34.191 [2024-11-26 17:45:11.440356] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:34.191 [2024-11-26 17:45:11.440367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:48:34.191 [2024-11-26 17:45:11.440380] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440399] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:34.191 [2024-11-26 17:45:11.440410] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:34.191 [2024-11-26 17:45:11.440421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:48:34.191 [2024-11-26 17:45:11.440428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:48:34.191 [2024-11-26 17:45:11.440438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.440460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:34.191 [2024-11-26 17:45:11.440472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:48:34.191 [2024-11-26 17:45:11.440479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.440555] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.191 [2024-11-26 17:45:11.440563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:34.191 [2024-11-26 17:45:11.440573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:48:34.191 [2024-11-26 17:45:11.440581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.191 [2024-11-26 17:45:11.440696] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:34.191 [2024-11-26 17:45:11.440708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:34.191 [2024-11-26 17:45:11.440719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:48:34.191 [2024-11-26 17:45:11.440743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:34.191 [2024-11-26 17:45:11.440768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:34.191 [2024-11-26 17:45:11.440783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:34.191 [2024-11-26 17:45:11.440792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:34.191 [2024-11-26 17:45:11.440802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:34.191 [2024-11-26 17:45:11.440808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:34.191 [2024-11-26 17:45:11.440817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:48:34.191 [2024-11-26 17:45:11.440823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:34.191 [2024-11-26 17:45:11.440841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:34.191 [2024-11-26 17:45:11.440868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:34.191 [2024-11-26 17:45:11.440890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:34.191 [2024-11-26 17:45:11.440914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440930] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:34.191 [2024-11-26 17:45:11.440936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:34.191 [2024-11-26 17:45:11.440950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:34.191 [2024-11-26 17:45:11.440962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:48:34.191 [2024-11-26 17:45:11.440968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:34.191 [2024-11-26 17:45:11.440976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:34.191 [2024-11-26 17:45:11.440983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:48:34.191 [2024-11-26 17:45:11.440991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:34.191 [2024-11-26 17:45:11.440997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:48:34.192 [2024-11-26 17:45:11.441006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:48:34.192 [2024-11-26 17:45:11.441012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:34.192 [2024-11-26 17:45:11.441023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:48:34.192 [2024-11-26 17:45:11.441030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:48:34.192 [2024-11-26 17:45:11.441038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:34.192 [2024-11-26 17:45:11.441044] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:34.192 [2024-11-26 17:45:11.441054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:34.192 [2024-11-26 17:45:11.441062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:34.192 [2024-11-26 17:45:11.441072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:34.192 [2024-11-26 17:45:11.441080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:34.192 [2024-11-26 17:45:11.441092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:34.192 [2024-11-26 17:45:11.441099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:34.192 [2024-11-26 17:45:11.441108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:48:34.192 [2024-11-26 17:45:11.441115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:34.192 [2024-11-26 17:45:11.441124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:34.192 [2024-11-26 17:45:11.441136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:34.192 [2024-11-26 17:45:11.441151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:34.192 [2024-11-26 17:45:11.441170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:48:34.192 [2024-11-26 17:45:11.441177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:48:34.192 [2024-11-26 17:45:11.441187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:48:34.192 [2024-11-26 17:45:11.441194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:48:34.192 [2024-11-26 17:45:11.441203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:48:34.192 [2024-11-26 17:45:11.441210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:48:34.192 [2024-11-26 17:45:11.441220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:48:34.192 [2024-11-26 17:45:11.441227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:48:34.192 [2024-11-26 17:45:11.441239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:48:34.192 [2024-11-26 17:45:11.441279] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:34.192 [2024-11-26 17:45:11.441289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:34.192 [2024-11-26 17:45:11.441307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:34.192 [2024-11-26 17:45:11.441314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:34.192 [2024-11-26 17:45:11.441325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:34.192 [2024-11-26 17:45:11.441332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:34.192 [2024-11-26 17:45:11.441344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:34.192 [2024-11-26 17:45:11.441351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:48:34.192 [2024-11-26 17:45:11.441361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:34.192 [2024-11-26 17:45:11.441402] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:48:34.192 [2024-11-26 17:45:11.441418] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:48:37.485 [2024-11-26 17:45:14.650330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.650498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:48:37.485 [2024-11-26 17:45:14.650545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3215.115 ms 00:48:37.485 [2024-11-26 17:45:14.650593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.703436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.703602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:37.485 [2024-11-26 17:45:14.703654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.512 ms 00:48:37.485 [2024-11-26 17:45:14.703679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.703899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.703943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:37.485 [2024-11-26 17:45:14.703987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:48:37.485 [2024-11-26 17:45:14.704025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.761128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.761252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:37.485 [2024-11-26 17:45:14.761288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.134 ms 00:48:37.485 [2024-11-26 17:45:14.761329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.761423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.761460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:37.485 [2024-11-26 17:45:14.761496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:48:37.485 [2024-11-26 17:45:14.761572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.762485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.762550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:37.485 [2024-11-26 17:45:14.762583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:48:37.485 [2024-11-26 17:45:14.762634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.762793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.762833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:37.485 [2024-11-26 17:45:14.762864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:48:37.485 [2024-11-26 17:45:14.762897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.789609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.789719] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:37.485 [2024-11-26 17:45:14.789758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.728 ms 00:48:37.485 [2024-11-26 17:45:14.789791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.816847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:48:37.485 [2024-11-26 17:45:14.822115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.822182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:37.485 [2024-11-26 17:45:14.822221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.232 ms 00:48:37.485 [2024-11-26 17:45:14.822259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.902958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.903118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:48:37.485 [2024-11-26 17:45:14.903158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.779 ms 00:48:37.485 [2024-11-26 17:45:14.903180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.485 [2024-11-26 17:45:14.903435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.485 [2024-11-26 17:45:14.903507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:37.485 [2024-11-26 17:45:14.903542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:48:37.485 [2024-11-26 17:45:14.903568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.745 [2024-11-26 17:45:14.939863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.745 [2024-11-26 17:45:14.939948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:48:37.745 [2024-11-26 17:45:14.939983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.274 ms 00:48:37.745 [2024-11-26 17:45:14.940004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.745 [2024-11-26 17:45:14.975863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.745 [2024-11-26 17:45:14.975944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:48:37.745 [2024-11-26 17:45:14.975988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.852 ms 00:48:37.745 [2024-11-26 17:45:14.976008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.745 [2024-11-26 17:45:14.976804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.745 [2024-11-26 17:45:14.976864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:37.745 [2024-11-26 17:45:14.976904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:48:37.746 [2024-11-26 17:45:14.976926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:37.746 [2024-11-26 17:45:15.075596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:37.746 [2024-11-26 17:45:15.075758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:48:37.746 [2024-11-26 17:45:15.075784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.771 ms 00:48:37.746 [2024-11-26 17:45:15.075794] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:37.746 [2024-11-26 17:45:15.114528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:37.746 [2024-11-26 17:45:15.114574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:48:37.746 [2024-11-26 17:45:15.114590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.706 ms
00:48:37.746 [2024-11-26 17:45:15.114598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:37.746 [2024-11-26 17:45:15.150248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:37.746 [2024-11-26 17:45:15.150286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:48:37.746 [2024-11-26 17:45:15.150300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.664 ms
00:48:37.746 [2024-11-26 17:45:15.150324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:37.746 [2024-11-26 17:45:15.185794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:37.746 [2024-11-26 17:45:15.185829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:48:37.746 [2024-11-26 17:45:15.185844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.495 ms
00:48:37.746 [2024-11-26 17:45:15.185852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:37.746 [2024-11-26 17:45:15.185898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:37.746 [2024-11-26 17:45:15.185907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:48:37.746 [2024-11-26 17:45:15.185923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:48:37.746 [2024-11-26 17:45:15.185931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:37.746 [2024-11-26 17:45:15.186050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:37.746 [2024-11-26 17:45:15.186064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:48:37.746 [2024-11-26 17:45:15.186075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:48:37.746 [2024-11-26 17:45:15.186083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:37.746 [2024-11-26 17:45:15.187565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3779.206 ms, result 0
00:48:38.006 {
00:48:38.006 "name": "ftl0",
00:48:38.006 "uuid": "0a4ea280-838a-45d3-81ea-2a0df37d5369"
00:48:38.006 }
00:48:38.006 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:48:38.006 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:48:38.006 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:48:38.006 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:48:38.006 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:48:38.266 /dev/nbd0
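At this point ftl0 is exposed as a kernel block device via NBD. The waitfornbd trace that follows polls /proc/partitions for the device and then issues one direct 4 KiB read to confirm it is actually usable; a condensed sketch of the same loop (the sleep interval is illustrative, and the real helper writes the probe to a temp file it then checks and removes):

  # Wait for nbd0 to register, then probe it with a single direct read.
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct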
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:48:38.266 1+0 records in
00:48:38.266 1+0 records out
00:48:38.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563898 s, 7.3 MB/s
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
00:48:38.266 17:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:48:38.527 [2024-11-26 17:45:15.754329] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:48:38.527 [2024-11-26 17:45:15.754457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81845 ]
00:48:38.787 [2024-11-26 17:45:15.931575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:48:38.787 [2024-11-26 17:45:16.073251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:48:40.169 [2024-11-26T17:45:18.551Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-26T17:45:19.488Z] Copying: 443/1024 [MB] (218 MBps) [2024-11-26T17:45:20.867Z] Copying: 667/1024 [MB] (223 MBps) [2024-11-26T17:45:21.437Z] Copying: 875/1024 [MB] (208 MBps) [2024-11-26T17:45:22.816Z] Copying: 1024/1024 [MB] (average 215 MBps)
00:48:45.370
00:48:45.370 17:45:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:48:47.278 17:45:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:48:47.278 [2024-11-26 17:45:24.330668] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:48:47.278 [2024-11-26 17:45:24.330818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81934 ] 00:48:47.278 [2024-11-26 17:45:24.516274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:47.278 [2024-11-26 17:45:24.625810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:48.657  [2024-11-26T17:45:27.041Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-26T17:45:27.979Z] Copying: 43/1024 [MB] (21 MBps) [2024-11-26T17:45:29.361Z] Copying: 62/1024 [MB] (19 MBps) [2024-11-26T17:45:29.930Z] Copying: 82/1024 [MB] (19 MBps) [2024-11-26T17:45:31.326Z] Copying: 101/1024 [MB] (19 MBps) [2024-11-26T17:45:32.265Z] Copying: 122/1024 [MB] (20 MBps) [2024-11-26T17:45:33.203Z] Copying: 143/1024 [MB] (20 MBps) [2024-11-26T17:45:34.140Z] Copying: 163/1024 [MB] (20 MBps) [2024-11-26T17:45:35.079Z] Copying: 184/1024 [MB] (20 MBps) [2024-11-26T17:45:36.017Z] Copying: 203/1024 [MB] (18 MBps) [2024-11-26T17:45:36.957Z] Copying: 222/1024 [MB] (19 MBps) [2024-11-26T17:45:38.338Z] Copying: 242/1024 [MB] (19 MBps) [2024-11-26T17:45:38.905Z] Copying: 261/1024 [MB] (19 MBps) [2024-11-26T17:45:40.298Z] Copying: 282/1024 [MB] (20 MBps) [2024-11-26T17:45:41.233Z] Copying: 303/1024 [MB] (20 MBps) [2024-11-26T17:45:42.172Z] Copying: 324/1024 [MB] (20 MBps) [2024-11-26T17:45:43.110Z] Copying: 345/1024 [MB] (20 MBps) [2024-11-26T17:45:44.049Z] Copying: 367/1024 [MB] (22 MBps) [2024-11-26T17:45:44.985Z] Copying: 387/1024 [MB] (19 MBps) [2024-11-26T17:45:45.926Z] Copying: 408/1024 [MB] (21 MBps) [2024-11-26T17:45:47.306Z] Copying: 429/1024 [MB] (20 MBps) [2024-11-26T17:45:48.251Z] Copying: 448/1024 [MB] (19 MBps) [2024-11-26T17:45:49.190Z] Copying: 467/1024 [MB] (18 MBps) [2024-11-26T17:45:50.142Z] Copying: 486/1024 [MB] (19 MBps) [2024-11-26T17:45:51.081Z] Copying: 507/1024 [MB] (20 MBps) [2024-11-26T17:45:52.018Z] Copying: 526/1024 [MB] (19 MBps) [2024-11-26T17:45:52.957Z] Copying: 546/1024 [MB] (19 MBps) [2024-11-26T17:45:53.895Z] Copying: 565/1024 [MB] (19 MBps) [2024-11-26T17:45:55.275Z] Copying: 585/1024 [MB] (19 MBps) [2024-11-26T17:45:56.215Z] Copying: 604/1024 [MB] (19 MBps) [2024-11-26T17:45:57.154Z] Copying: 623/1024 [MB] (18 MBps) [2024-11-26T17:45:58.096Z] Copying: 642/1024 [MB] (19 MBps) [2024-11-26T17:45:59.034Z] Copying: 662/1024 [MB] (20 MBps) [2024-11-26T17:45:59.970Z] Copying: 682/1024 [MB] (19 MBps) [2024-11-26T17:46:00.907Z] Copying: 701/1024 [MB] (19 MBps) [2024-11-26T17:46:02.290Z] Copying: 720/1024 [MB] (19 MBps) [2024-11-26T17:46:03.226Z] Copying: 739/1024 [MB] (18 MBps) [2024-11-26T17:46:04.164Z] Copying: 759/1024 [MB] (19 MBps) [2024-11-26T17:46:05.104Z] Copying: 779/1024 [MB] (20 MBps) [2024-11-26T17:46:06.042Z] Copying: 800/1024 [MB] (20 MBps) [2024-11-26T17:46:06.981Z] Copying: 820/1024 [MB] (19 MBps) [2024-11-26T17:46:07.947Z] Copying: 840/1024 [MB] (20 MBps) [2024-11-26T17:46:08.886Z] Copying: 860/1024 [MB] (19 MBps) [2024-11-26T17:46:10.262Z] Copying: 880/1024 [MB] (19 MBps) [2024-11-26T17:46:11.198Z] Copying: 900/1024 [MB] (20 MBps) [2024-11-26T17:46:12.136Z] Copying: 920/1024 [MB] (20 MBps) [2024-11-26T17:46:13.075Z] Copying: 941/1024 [MB] (20 MBps) [2024-11-26T17:46:14.013Z] Copying: 961/1024 [MB] (20 MBps) [2024-11-26T17:46:14.952Z] Copying: 981/1024 [MB] (19 MBps) [2024-11-26T17:46:15.890Z] Copying: 1000/1024 [MB] (19 MBps) 
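Two spdk_dd passes drive the payload: the first (pid 81845) filled testfile with 1 GiB of random data (262144 blocks of 4096 B) at ~215 MBps, md5sum recorded its digest, and the second (pid 81934, whose progress just completed above) pushed the file through /dev/nbd0 into FTL with direct I/O at ~20 MBps; 1024 MB / 20 MBps is roughly 51 s, consistent with the 17:45:24 to 17:46:16 timestamps. The same three steps in isolation (the digest variable name is illustrative):

  # 1 GiB of random data, its md5, then a direct-I/O copy into the FTL nbd.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
  md5_before=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
      --bs=4096 --count=262144 --oflag=direct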
[2024-11-26T17:46:16.149Z] Copying: 1021/1024 [MB] (20 MBps) [2024-11-26T17:46:17.527Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:49:40.081 00:49:40.081 17:46:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:49:40.081 17:46:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:49:40.081 17:46:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:49:40.341 [2024-11-26 17:46:17.620722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.620790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:40.342 [2024-11-26 17:46:17.620813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:49:40.342 [2024-11-26 17:46:17.620829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.620858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:40.342 [2024-11-26 17:46:17.625149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.625188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:40.342 [2024-11-26 17:46:17.625213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:49:40.342 [2024-11-26 17:46:17.625223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.627553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.627690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:40.342 [2024-11-26 17:46:17.627725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:49:40.342 [2024-11-26 17:46:17.627735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.644621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.644678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:40.342 [2024-11-26 17:46:17.644694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.880 ms 00:49:40.342 [2024-11-26 17:46:17.644704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.649756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.649792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:40.342 [2024-11-26 17:46:17.649807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms 00:49:40.342 [2024-11-26 17:46:17.649816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.686103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.686147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:40.342 [2024-11-26 17:46:17.686164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.280 ms 00:49:40.342 [2024-11-26 17:46:17.686173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.707956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.708016] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:40.342 [2024-11-26 17:46:17.708037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.775 ms 00:49:40.342 [2024-11-26 17:46:17.708047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.708195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.708207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:40.342 [2024-11-26 17:46:17.708220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:49:40.342 [2024-11-26 17:46:17.708230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.744198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.744240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:40.342 [2024-11-26 17:46:17.744256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.013 ms 00:49:40.342 [2024-11-26 17:46:17.744265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.342 [2024-11-26 17:46:17.781018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.342 [2024-11-26 17:46:17.781123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:40.342 [2024-11-26 17:46:17.781164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.774 ms 00:49:40.342 [2024-11-26 17:46:17.781176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.604 [2024-11-26 17:46:17.818218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.604 [2024-11-26 17:46:17.818266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:40.604 [2024-11-26 17:46:17.818284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.052 ms 00:49:40.604 [2024-11-26 17:46:17.818295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.604 [2024-11-26 17:46:17.854177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.604 [2024-11-26 17:46:17.854224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:40.604 [2024-11-26 17:46:17.854242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.836 ms 00:49:40.604 [2024-11-26 17:46:17.854251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.604 [2024-11-26 17:46:17.854300] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:40.604 [2024-11-26 17:46:17.854317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:49:40.604 [2024-11-26 17:46:17.854331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:40.604 [2024-11-26 17:46:17.854341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:40.604 [2024-11-26 17:46:17.854353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:40.604 [2024-11-26 17:46:17.854364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:40.604 [2024-11-26 17:46:17.854376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:40.604 [2024-11-26 17:46:17.854386] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free [...] [2024-11-26 17:46:17.855450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:40.605 [2024-11-26 17:46:17.855467] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] [2024-11-26 17:46:17.855479] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4ea280-838a-45d3-81ea-2a0df37d5369 [2024-11-26 17:46:17.855488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 [2024-11-26 17:46:17.855501] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 [2024-11-26 17:46:17.855515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 [2024-11-26 17:46:17.855527] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf [2024-11-26 17:46:17.855536] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:49:40.605 [2024-11-26 17:46:17.855547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:40.605 [2024-11-26 17:46:17.855557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:40.605 [2024-11-26 17:46:17.855567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:40.605 [2024-11-26 17:46:17.855576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:40.605 [2024-11-26 17:46:17.855588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.605 [2024-11-26 17:46:17.855597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:40.605 [2024-11-26 17:46:17.855621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:49:40.605 [2024-11-26 17:46:17.855632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.605 [2024-11-26 17:46:17.875878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.605 [2024-11-26 17:46:17.875920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:40.605 [2024-11-26 17:46:17.875937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.230 ms 00:49:40.605 [2024-11-26 17:46:17.875947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.605 [2024-11-26 17:46:17.876485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:40.605 [2024-11-26 17:46:17.876509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:40.605 [2024-11-26 17:46:17.876523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:49:40.605 [2024-11-26 17:46:17.876533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.605 [2024-11-26 17:46:17.943604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.605 [2024-11-26 17:46:17.943693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:40.605 [2024-11-26 17:46:17.943711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.605 [2024-11-26 17:46:17.943722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.605 [2024-11-26 17:46:17.943805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.605 [2024-11-26 17:46:17.943816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:40.605 [2024-11-26 17:46:17.943829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.605 [2024-11-26 17:46:17.943839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.605 [2024-11-26 17:46:17.943984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.605 [2024-11-26 17:46:17.944001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:40.605 [2024-11-26 17:46:17.944014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.605 [2024-11-26 17:46:17.944024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.605 [2024-11-26 17:46:17.944053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.605 [2024-11-26 17:46:17.944064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:40.605 [2024-11-26 17:46:17.944077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.605 [2024-11-26 
17:46:17.944087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.066361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.066432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:40.866 [2024-11-26 17:46:18.066465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.066475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.162633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.162700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:40.866 [2024-11-26 17:46:18.162718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.162728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.162866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.162878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:40.866 [2024-11-26 17:46:18.162894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.162903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.162972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.162988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:40.866 [2024-11-26 17:46:18.163000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.163010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.163143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.163163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:40.866 [2024-11-26 17:46:18.163174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.163186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.163235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.163249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:40.866 [2024-11-26 17:46:18.163260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.163270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.163326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.163336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:40.866 [2024-11-26 17:46:18.163347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.163359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.866 [2024-11-26 17:46:18.163414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:40.866 [2024-11-26 17:46:18.163425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:40.866 [2024-11-26 17:46:18.163437] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:40.866 [2024-11-26 17:46:18.163447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:40.867 [2024-11-26 17:46:18.163596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.893 ms, result 0 00:49:40.867 true 00:49:40.867 17:46:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81697 00:49:40.867 17:46:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81697 00:49:40.867 17:46:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:49:40.867 [2024-11-26 17:46:18.281172] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:49:40.867 [2024-11-26 17:46:18.281299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82476 ] 00:49:41.127 [2024-11-26 17:46:18.456834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:41.387 [2024-11-26 17:46:18.580200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:42.767  [2024-11-26T17:46:21.188Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-26T17:46:22.124Z] Copying: 446/1024 [MB] (228 MBps) [2024-11-26T17:46:23.066Z] Copying: 672/1024 [MB] (225 MBps) [2024-11-26T17:46:23.636Z] Copying: 896/1024 [MB] (224 MBps) [2024-11-26T17:46:24.577Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:49:47.131 00:49:47.391 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81697 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:49:47.391 17:46:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:47.391 [2024-11-26 17:46:24.668845] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
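This is the pivot of the dirty-shutdown scenario: after the clean 'FTL shutdown ... result 0' above, the test SIGKILLs the target outright and then drives the ftl0 bdev from a standalone spdk_dd using the JSON config captured earlier, with no target process involved. Condensed from the commands traced above (dirty_shutdown.sh @83-@88; $spdk_tgt_pid and $SPDK_DIR stand in for the literal pid 81697 and /home/vagrant/spdk_repo/spdk):

    # Kill the target without any chance to persist state, and drop its
    # trace file (this is what produces the 'Killed' message above).
    kill -9 "$spdk_tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"

    # Fresh random data, then write it to the ftl0 bdev from a standalone
    # spdk_dd process driven by the saved JSON config.
    "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
        --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144
    "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$SPDK_DIR/test/ftl/config/ftl.json"

The second spdk_dd produces the startup trace that follows: it brings ftl0 up from the killed target's on-disk state (note the blobstore 'Performing recovery' and the 'SHM: clean 0' superblock load below) and, once running, marks the device dirty ('Set FTL dirty state'), so the shutdown being exercised is a dirty one.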
00:49:47.391 [2024-11-26 17:46:24.668951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82541 ] 00:49:47.652 [2024-11-26 17:46:24.840911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:47.652 [2024-11-26 17:46:24.955332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:47.912 [2024-11-26 17:46:25.325185] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:47.912 [2024-11-26 17:46:25.325260] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:48.171 [2024-11-26 17:46:25.391470] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:49:48.171 [2024-11-26 17:46:25.391914] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:49:48.171 [2024-11-26 17:46:25.392182] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:49:48.433 [2024-11-26 17:46:25.686506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.686639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:48.433 [2024-11-26 17:46:25.686676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:49:48.433 [2024-11-26 17:46:25.686691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.686750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.686763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:48.433 [2024-11-26 17:46:25.686773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:49:48.433 [2024-11-26 17:46:25.686783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.686807] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:48.433 [2024-11-26 17:46:25.687852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:48.433 [2024-11-26 17:46:25.687873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.687883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:48.433 [2024-11-26 17:46:25.687894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:49:48.433 [2024-11-26 17:46:25.687903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.689430] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:49:48.433 [2024-11-26 17:46:25.708828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.708920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:49:48.433 [2024-11-26 17:46:25.708939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 00:49:48.433 [2024-11-26 17:46:25.708951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.709020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.709032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:49:48.433 [2024-11-26 17:46:25.709042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:49:48.433 [2024-11-26 17:46:25.709069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.716258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.716357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:48.433 [2024-11-26 17:46:25.716375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.118 ms 00:49:48.433 [2024-11-26 17:46:25.716385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.716473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.716488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:48.433 [2024-11-26 17:46:25.716500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:49:48.433 [2024-11-26 17:46:25.716509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.716568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.716579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:48.433 [2024-11-26 17:46:25.716590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:49:48.433 [2024-11-26 17:46:25.716600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.716648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:48.433 [2024-11-26 17:46:25.721824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.721862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:48.433 [2024-11-26 17:46:25.721875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.195 ms 00:49:48.433 [2024-11-26 17:46:25.721884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.721917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.721929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:48.433 [2024-11-26 17:46:25.721939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:49:48.433 [2024-11-26 17:46:25.721949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.722006] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:49:48.433 [2024-11-26 17:46:25.722031] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:49:48.433 [2024-11-26 17:46:25.722067] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:49:48.433 [2024-11-26 17:46:25.722085] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:49:48.433 [2024-11-26 17:46:25.722177] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:48.433 [2024-11-26 17:46:25.722190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:48.433 
[2024-11-26 17:46:25.722202] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:48.433 [2024-11-26 17:46:25.722218] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:48.433 [2024-11-26 17:46:25.722229] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:48.433 [2024-11-26 17:46:25.722239] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:49:48.433 [2024-11-26 17:46:25.722249] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:48.433 [2024-11-26 17:46:25.722259] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:48.433 [2024-11-26 17:46:25.722268] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:48.433 [2024-11-26 17:46:25.722278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.722287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:48.433 [2024-11-26 17:46:25.722297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:49:48.433 [2024-11-26 17:46:25.722306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.433 [2024-11-26 17:46:25.722380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.433 [2024-11-26 17:46:25.722394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:48.433 [2024-11-26 17:46:25.722404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:49:48.433 [2024-11-26 17:46:25.722414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.434 [2024-11-26 17:46:25.722522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:48.434 [2024-11-26 17:46:25.722538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:48.434 [2024-11-26 17:46:25.722549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:48.434 [2024-11-26 17:46:25.722577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:48.434 [2024-11-26 17:46:25.722626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:48.434 [2024-11-26 17:46:25.722658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:48.434 [2024-11-26 17:46:25.722668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:49:48.434 [2024-11-26 17:46:25.722677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:48.434 [2024-11-26 17:46:25.722686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:48.434 [2024-11-26 17:46:25.722695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:49:48.434 [2024-11-26 17:46:25.722704] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:48.434 [2024-11-26 17:46:25.722741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:48.434 [2024-11-26 17:46:25.722768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:48.434 [2024-11-26 17:46:25.722820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:48.434 [2024-11-26 17:46:25.722846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:48.434 [2024-11-26 17:46:25.722871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:48.434 [2024-11-26 17:46:25.722888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:48.434 [2024-11-26 17:46:25.722897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:48.434 [2024-11-26 17:46:25.722913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:48.434 [2024-11-26 17:46:25.722922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:49:48.434 [2024-11-26 17:46:25.722930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:48.434 [2024-11-26 17:46:25.722938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:48.434 [2024-11-26 17:46:25.722947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:49:48.434 [2024-11-26 17:46:25.722956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:48.434 [2024-11-26 17:46:25.722973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:49:48.434 [2024-11-26 17:46:25.722981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:48.434 [2024-11-26 17:46:25.722990] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:48.434 [2024-11-26 17:46:25.723000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:48.434 [2024-11-26 17:46:25.723014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:48.434 [2024-11-26 17:46:25.723024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:48.434 [2024-11-26 
17:46:25.723034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:48.434 [2024-11-26 17:46:25.723044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:48.434 [2024-11-26 17:46:25.723053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:48.434 [2024-11-26 17:46:25.723062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:48.434 [2024-11-26 17:46:25.723070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:48.434 [2024-11-26 17:46:25.723079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:48.434 [2024-11-26 17:46:25.723089] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:48.434 [2024-11-26 17:46:25.723118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:49:48.434 [2024-11-26 17:46:25.723141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:49:48.434 [2024-11-26 17:46:25.723151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:49:48.434 [2024-11-26 17:46:25.723161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:49:48.434 [2024-11-26 17:46:25.723171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:49:48.434 [2024-11-26 17:46:25.723180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:49:48.434 [2024-11-26 17:46:25.723190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:49:48.434 [2024-11-26 17:46:25.723200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:49:48.434 [2024-11-26 17:46:25.723210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:49:48.434 [2024-11-26 17:46:25.723220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:49:48.434 [2024-11-26 17:46:25.723269] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:49:48.434 [2024-11-26 17:46:25.723291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:48.434 [2024-11-26 17:46:25.723314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:48.434 [2024-11-26 17:46:25.723324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:48.434 [2024-11-26 17:46:25.723334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:48.434 [2024-11-26 17:46:25.723346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.434 [2024-11-26 17:46:25.723357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:48.434 [2024-11-26 17:46:25.723369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:49:48.434 [2024-11-26 17:46:25.723379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.434 [2024-11-26 17:46:25.762653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.434 [2024-11-26 17:46:25.762709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:48.434 [2024-11-26 17:46:25.762724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.285 ms 00:49:48.434 [2024-11-26 17:46:25.762734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.434 [2024-11-26 17:46:25.762835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.434 [2024-11-26 17:46:25.762847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:48.434 [2024-11-26 17:46:25.762857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:49:48.434 [2024-11-26 17:46:25.762866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.434 [2024-11-26 17:46:25.837475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.434 [2024-11-26 17:46:25.837557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:48.434 [2024-11-26 17:46:25.837577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.672 ms 00:49:48.434 [2024-11-26 17:46:25.837587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.434 [2024-11-26 17:46:25.837688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.434 [2024-11-26 17:46:25.837700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:48.434 [2024-11-26 17:46:25.837712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:48.434 [2024-11-26 17:46:25.837721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.434 [2024-11-26 17:46:25.838288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.434 [2024-11-26 17:46:25.838312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:48.434 [2024-11-26 17:46:25.838325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:49:48.434 [2024-11-26 17:46:25.838344] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.435 [2024-11-26 17:46:25.838478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.435 [2024-11-26 17:46:25.838494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:48.435 [2024-11-26 17:46:25.838506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:49:48.435 [2024-11-26 17:46:25.838517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.435 [2024-11-26 17:46:25.857790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.435 [2024-11-26 17:46:25.857835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:48.435 [2024-11-26 17:46:25.857849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.281 ms 00:49:48.435 [2024-11-26 17:46:25.857859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:25.877244] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:49:48.705 [2024-11-26 17:46:25.877283] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:49:48.705 [2024-11-26 17:46:25.877298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:25.877308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:49:48.705 [2024-11-26 17:46:25.877319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.345 ms 00:49:48.705 [2024-11-26 17:46:25.877328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:25.907359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:25.907470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:49:48.705 [2024-11-26 17:46:25.907489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.037 ms 00:49:48.705 [2024-11-26 17:46:25.907500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:25.925751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:25.925842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:49:48.705 [2024-11-26 17:46:25.925860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.249 ms 00:49:48.705 [2024-11-26 17:46:25.925870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:25.944508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:25.944551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:49:48.705 [2024-11-26 17:46:25.944565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.627 ms 00:49:48.705 [2024-11-26 17:46:25.944575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:25.945390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:25.945426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:48.705 [2024-11-26 17:46:25.945439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:49:48.705 [2024-11-26 17:46:25.945450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
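Every management step in this startup, like the shutdown earlier, is logged as the same trace_step quadruple: an Action marker, a name, a duration in milliseconds, and a status. That makes the log itself easy to profile; a throwaway sketch, assuming the untruncated log is saved one record per line (the file name spdk_tgt.log is hypothetical):

    # Print the costliest FTL management steps: pair each 'name:' record
    # with the 'duration:' record that follows it, then sort by duration.
    awk '/trace_step/ && /name: /     { n = $0; sub(/.*name: /, "", n) }
         /trace_step/ && /duration: / { d = $0; sub(/.*duration: /, "", d);
                                        sub(/ ms.*/, "", d);
                                        printf "%10.3f ms  %s\n", d, n }' \
        spdk_tgt.log | sort -rn | head

On this run, a tally like that surfaces 'Initialize NV cache' (74.672 ms) and 'Initialize metadata' (39.285 ms) as the heaviest startup steps seen so far.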
00:49:48.705 [2024-11-26 17:46:26.035039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.035201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:49:48.705 [2024-11-26 17:46:26.035222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.735 ms 00:49:48.705 [2024-11-26 17:46:26.035233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.046301] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:49:48.705 [2024-11-26 17:46:26.049720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.049755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:48.705 [2024-11-26 17:46:26.049771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.425 ms 00:49:48.705 [2024-11-26 17:46:26.049787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.049896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.049908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:49:48.705 [2024-11-26 17:46:26.049919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:49:48.705 [2024-11-26 17:46:26.049928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.050024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.050036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:48.705 [2024-11-26 17:46:26.050045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:49:48.705 [2024-11-26 17:46:26.050054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.050085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.050096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:48.705 [2024-11-26 17:46:26.050105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:49:48.705 [2024-11-26 17:46:26.050115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.050149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:49:48.705 [2024-11-26 17:46:26.050161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.050169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:49:48.705 [2024-11-26 17:46:26.050179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:49:48.705 [2024-11-26 17:46:26.050192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.086165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 17:46:26.086212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:48.705 [2024-11-26 17:46:26.086227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.017 ms 00:49:48.705 [2024-11-26 17:46:26.086238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:48.705 [2024-11-26 17:46:26.086323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:48.705 [2024-11-26 
[2024-11-26 17:46:26.086323] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Finalize initialization (duration: 0.040 ms, status: 0)
[2024-11-26 17:46:26.087518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.288 ms, result 0
[2024-11-26T17:46:28.488Z..17:47:02.783Z] Copying: 29/1024 [MB] (29 MBps) through 1023/1024 [MB] (20 MBps): 37 roughly one-second progress samples at 20-29 MBps, condensed
[2024-11-26T17:47:02.783Z] Copying: 1024/1024 [MB] (average 27 MBps)
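The progress meter above reports cumulative megabytes at roughly one-second intervals, and its closing line derives the average rate from them. A small sketch of that arithmetic (sample values abridged from the run above; the elapsed-seconds figures are approximations read off the timestamps):

```python
# Cumulative (elapsed seconds, MB copied) samples, abridged from the
# progress meter above; elapsed times are approximated from the timestamps.
samples = [(0, 0), (1, 29), (2, 55), (3, 82), (4, 108), (38, 1024)]

# Per-interval rate between consecutive cumulative samples
for (t0, mb0), (t1, mb1) in zip(samples, samples[1:]):
    print(f"t={t1:2d}s  {(mb1 - mb0) / (t1 - t0):.0f} MBps")

# Whole-copy average, which is what the final
# "Copying: 1024/1024 [MB] (average 27 MBps)" line reports
print(f"average {samples[-1][1] / samples[-1][0]:.0f} MBps")
```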
[2024-11-26 17:47:02.737830] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Deinit core IO channel (duration: 0.006 ms, status: 0)
[2024-11-26 17:47:02.740806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-26 17:47:02.746913] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Unregister IO device (duration: 6.061 ms, status: 0)
[2024-11-26 17:47:02.758790] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Stop core poller (duration: 9.885 ms, status: 0)
[2024-11-26 17:47:02.780684] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist L2P (duration: 21.814 ms, status: 0)
[2024-11-26 17:47:02.786707] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Finish L2P trims (duration: 5.784 ms, status: 0)
[2024-11-26 17:47:02.830992] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist NV cache metadata (duration: 44.152 ms, status: 0)
[2024-11-26 17:47:02.856449] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist valid map metadata (duration: 25.304 ms, status: 0)
[2024-11-26 17:47:02.957961] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist P2L metadata (duration: 101.472 ms, status: 0)
[2024-11-26 17:47:03.002720] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist band info metadata (duration: 44.659 ms, status: 0)
[2024-11-26 17:47:03.045741] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist trim metadata (duration: 42.842 ms, status: 0)
[2024-11-26 17:47:03.088601] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Persist superblock (duration: 42.734 ms, status: 0)
[2024-11-26 17:47:03.130503] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Set FTL clean state (duration: 41.588 ms, status: 0)
[2024-11-26 17:47:03.130722] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1: 114432 / 261120 wr_cnt: 1 state: open
  Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries, condensed)
[2024-11-26 17:47:03.131558] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID: 0a4ea280-838a-45d3-81ea-2a0df37d5369
  total valid LBAs: 114432
  total writes: 115392
  user writes: 114432
  WAF: 1.0084
  limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-26 17:47:03.131686] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Dump statistics (duration: 0.968 ms, status: 0)
[2024-11-26 17:47:03.154396] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Deinitialize L2P (duration: 22.647 ms, status: 0)
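The statistics dump above contains everything needed to reproduce the reported write-amplification figure: WAF is total media writes divided by user writes. A quick check with the dumped values:

```python
# Values from the ftl_dev_dump_stats block above
total_writes = 115392  # blocks the FTL wrote to the media in total
user_writes = 114432   # blocks the host actually asked to write

waf = total_writes / user_writes        # write-amplification factor
print(f"WAF: {waf:.4f}")                # WAF: 1.0084, matching the dump
print(f"metadata/relocation overhead: {total_writes - user_writes} blocks")
```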
[2024-11-26 17:47:03.155167] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Deinitialize P2L checkpointing (duration: 0.625 ms, status: 0)
[2024-11-26 17:47:03.212358..17:47:03.473836] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each (duration: 0.000 ms, status: 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-26 17:47:03.473987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 740.470 ms, result 0
17:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
17:47:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
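dirty_shutdown.sh appears to validate the data by comparing md5 digests of what was written before the dirty shutdown against what spdk_dd reads back through ftl0. A sketch of that comparison (the exact pairing of testfile and testfile2 is my reading of the two commands above, and the chunked hashing is an assumption, not the script's implementation):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Hex md5 of a file, hashed in 1 MiB chunks to bound memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

before = md5_of("/home/vagrant/spdk_repo/spdk/test/ftl/testfile2")
after = md5_of("/home/vagrant/spdk_repo/spdk/test/ftl/testfile")
print("data intact" if before == after else "MISMATCH after dirty shutdown")
```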
[2024-11-26 17:47:07.926525] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
[2024-11-26 17:47:07.926775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82978 ]
[2024-11-26 17:47:08.101859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-26 17:47:08.251591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-26 17:47:08.705546] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-26 17:47:08.705816] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-26 17:47:08.871397] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Check configuration (duration: 0.009 ms, status: 0)
[2024-11-26 17:47:08.871736] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Open base bdev (duration: 0.062 ms, status: 0)
[2024-11-26 17:47:08.871798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-26 17:47:08.873049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-26 17:47:08.873081] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Open cache bdev (duration: 1.293 ms, status: 0)
[2024-11-26 17:47:08.875772] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-26 17:47:08.899921] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Load super block (duration: 24.193 ms, status: 0)
[2024-11-26 17:47:08.900180] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Validate super block (duration: 0.051 ms, status: 0)
[2024-11-26 17:47:08.915032] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize memory pools (duration: 14.726 ms, status: 0)
[2024-11-26 17:47:08.915226] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize bands (duration: 0.082 ms, status: 0)
[2024-11-26 17:47:08.915358] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Register IO device (duration: 0.012 ms, status: 0)
[2024-11-26 17:47:08.915423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-26 17:47:08.922144] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize core IO channel (duration: 6.744 ms, status: 0)
[2024-11-26 17:47:08.922388] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Decorate bands (duration: 0.017 ms, status: 0)
[2024-11-26 17:47:08.922479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-26 17:47:08.922510] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-11-26 17:47:08.922714] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-11-26 17:47:08.922769] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-26 17:47:08.922780] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-26 17:47:08.922790] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-26 17:47:08.922801] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-26 17:47:08.922814] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-26 17:47:08.922824] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-26 17:47:08.922834] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize layout (duration: 0.359 ms, status: 0)
[2024-11-26 17:47:08.922952] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Verify layout (duration: 0.065 ms, status: 0)
[2024-11-26 17:47:08.923112] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region, offset MiB, blocks MiB):
  sb                    0.00        0.12
  l2p                   0.12       80.00
  band_md              80.12        0.50
  band_md_mirror       80.62        0.50
  nvc_md              113.88        0.12
  nvc_md_mirror       114.00        0.12
  p2l0                 81.12        8.00
  p2l1                 89.12        8.00
  p2l2                 97.12        8.00
  p2l3                105.12        8.00
  trim_md             113.12        0.25
  trim_md_mirror      113.38        0.25
  trim_log            113.62        0.12
  trim_log_mirror     113.75        0.12
[2024-11-26 17:47:08.923480] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region, offset MiB, blocks MiB):
  sb_mirror             0.00        0.12
  vmap             102400.25        3.38
  data_btm              0.25   102400.00
[2024-11-26 17:47:08.923559] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-26 17:47:08.923704] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-26 17:47:08.923758] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Layout upgrade (duration: 0.715 ms, status: 0)
[2024-11-26 17:47:08.977883] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize metadata (duration: 54.142 ms, status: 0)
[2024-11-26 17:47:08.978128] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize band addresses (duration: 0.069 ms, status: 0)
[2024-11-26 17:47:09.048525] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize NV cache (duration: 70.367 ms, status: 0)
[2024-11-26 17:47:09.048842] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize valid map (duration: 0.005 ms, status: 0)
[2024-11-26 17:47:09.049796] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize trim map (duration: 0.836 ms, status: 0)
[2024-11-26 17:47:09.049984] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize bands metadata (duration: 0.122 ms, status: 0)
[2024-11-26 17:47:09.075186] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize reloc (duration: 25.175 ms, status: 0)
[2024-11-26 17:47:09.100227] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-11-26 17:47:09.100307] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-26 17:47:09.100328] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Restore NV cache metadata (duration: 24.873 ms, status: 0)
[2024-11-26 17:47:09.138490] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Restore valid map metadata (duration: 38.092 ms, status: 0)
[2024-11-26 17:47:09.163824] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Restore band info metadata (duration: 25.099 ms, status: 0)
[2024-11-26 17:47:09.187679] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Restore trim metadata (duration: 23.661 ms, status: 0)
[2024-11-26 17:47:09.188877] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize P2L checkpointing (duration: 0.824 ms, status: 0)
[2024-11-26 17:47:09.304460] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Restore P2L checkpoints (duration: 115.709 ms, status: 0)
[2024-11-26 17:47:09.323041] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-26 17:47:09.328846] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Initialize L2P (duration: 24.139 ms, status: 0)
[2024-11-26 17:47:09.329126] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Restore L2P (duration: 0.015 ms, status: 0)
[2024-11-26 17:47:09.331551] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Finalize band initialization (duration: 2.339 ms, status: 0)
[2024-11-26 17:47:09.331797] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Start core poller (duration: 0.009 ms, status: 0)
[2024-11-26 17:47:09.331881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-26 17:47:09.331893] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Self test on startup (duration: 0.014 ms, status: 0)
[2024-11-26 17:47:09.381684] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Set FTL dirty state (duration: 49.829 ms, status: 0)
[2024-11-26 17:47:09.381991] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Finalize initialization (duration: 0.054 ms, status: 0)
[2024-11-26 17:47:09.385769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 513.722 ms, result 0
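The finish_msg above puts the whole recovery startup at 513.722 ms; the per-step durations in the trace show where that time goes. A short sketch ranking the dominant steps (durations transcribed from the trace above, list abridged to the ten largest):

```python
# The ten largest per-step durations (ms), transcribed from the trace above
steps = {
    "Restore P2L checkpoints": 115.709,
    "Initialize NV cache": 70.367,
    "Initialize metadata": 54.142,
    "Set FTL dirty state": 49.829,
    "Restore valid map metadata": 38.092,
    "Initialize reloc": 25.175,
    "Restore band info metadata": 25.099,
    "Restore NV cache metadata": 24.873,
    "Load super block": 24.193,
    "Initialize L2P": 24.139,
}
total = 513.722  # from the 'FTL startup' finish_msg above

for name, ms in sorted(steps.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} {ms:8.3f} ms  {ms / total:6.1%}")
print(f"ten largest steps cover {sum(steps.values()) / total:.1%} of startup")
```

Restoring the P2L checkpoints alone accounts for more than a fifth of the startup, which is consistent with this being the recovery path after a dirty shutdown.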
00:50:32.190 [2024-11-26 17:47:09.385769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 513.722 ms, result 0 00:50:33.571  [2024-11-26T17:47:11.588Z] Copying: 928/1048576 [kB] (928 kBps) ... [2024-11-26T17:47:42.060Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-26 17:47:42.053380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.614 [2024-11-26 17:47:42.053483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:04.614 [2024-11-26 17:47:42.053501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:51:04.614 [2024-11-26 17:47:42.053512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.614 [2024-11-26 17:47:42.053551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:04.874 [2024-11-26 17:47:42.059664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.059745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:04.874 [2024-11-26 17:47:42.059763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.104 ms 00:51:04.874 [2024-11-26 17:47:42.059773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.060197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.060223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:04.874 [2024-11-26 17:47:42.060252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.253 ms 00:51:04.874 [2024-11-26 17:47:42.060261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.073953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.074035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:04.874 [2024-11-26 17:47:42.074053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.692 ms 00:51:04.874 [2024-11-26 17:47:42.074065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.079770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.079815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:04.874 [2024-11-26 17:47:42.079841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.666 ms 00:51:04.874 [2024-11-26 17:47:42.079849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.126575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.126684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:04.874 [2024-11-26 17:47:42.126702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.744 ms 00:51:04.874 [2024-11-26 17:47:42.126712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.154429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.154525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:04.874 [2024-11-26 17:47:42.154544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.651 ms 00:51:04.874 [2024-11-26 17:47:42.154554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.156879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.157013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:04.874 [2024-11-26 17:47:42.157034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:51:04.874 [2024-11-26 17:47:42.157052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.203487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.203577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:04.874 [2024-11-26 17:47:42.203593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.496 ms 00:51:04.874 [2024-11-26 17:47:42.203602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.250088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.250179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:04.874 [2024-11-26 17:47:42.250195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.466 ms 00:51:04.874 [2024-11-26 17:47:42.250204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:04.874 [2024-11-26 17:47:42.298790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:04.874 [2024-11-26 17:47:42.298987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:51:04.874 [2024-11-26 
17:47:42.299034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.572 ms 00:51:04.874 [2024-11-26 17:47:42.299055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.147 [2024-11-26 17:47:42.347379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:05.147 [2024-11-26 17:47:42.347573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:05.147 [2024-11-26 17:47:42.347618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.221 ms 00:51:05.147 [2024-11-26 17:47:42.347641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.147 [2024-11-26 17:47:42.347754] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:05.147 [2024-11-26 17:47:42.347837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:51:05.147 [2024-11-26 17:47:42.347875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:51:05.147 [2024-11-26 17:47:42.347886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.347994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348025] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:05.147 [2024-11-26 17:47:42.348063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 
17:47:42.348233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:51:05.148 [2024-11-26 17:47:42.348437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:51:05.148 [2024-11-26 17:47:42.348669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:51:05.149 [2024-11-26 17:47:42.348678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:05.149 [2024-11-26 17:47:42.348686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:05.149 [2024-11-26 17:47:42.348694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:05.149 [2024-11-26 17:47:42.348702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:05.149 [2024-11-26 17:47:42.348710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:05.149 [2024-11-26 17:47:42.348728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:05.149 [2024-11-26 17:47:42.348737] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4ea280-838a-45d3-81ea-2a0df37d5369 00:51:05.149 [2024-11-26 17:47:42.348748] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:51:05.149 [2024-11-26 17:47:42.348756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 150208 00:51:05.149 [2024-11-26 17:47:42.348770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 148224 00:51:05.149 [2024-11-26 17:47:42.348779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0134 00:51:05.149 [2024-11-26 17:47:42.348787] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:05.149 [2024-11-26 17:47:42.348817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:05.149 [2024-11-26 17:47:42.348826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:05.149 [2024-11-26 17:47:42.348833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:05.149 [2024-11-26 17:47:42.348840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:05.149 [2024-11-26 17:47:42.348848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:05.149 [2024-11-26 17:47:42.348857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:05.149 [2024-11-26 17:47:42.348866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:51:05.149 [2024-11-26 17:47:42.348874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 17:47:42.373009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:05.149 [2024-11-26 17:47:42.373188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:05.149 [2024-11-26 17:47:42.373207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.106 ms 00:51:05.149 [2024-11-26 17:47:42.373216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 17:47:42.373958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:05.149 [2024-11-26 17:47:42.373977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:05.149 [2024-11-26 17:47:42.373987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:51:05.149 [2024-11-26 17:47:42.373995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 
17:47:42.437301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.149 [2024-11-26 17:47:42.437382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:05.149 [2024-11-26 17:47:42.437398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.149 [2024-11-26 17:47:42.437408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 17:47:42.437501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.149 [2024-11-26 17:47:42.437511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:05.149 [2024-11-26 17:47:42.437519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.149 [2024-11-26 17:47:42.437527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 17:47:42.437686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.149 [2024-11-26 17:47:42.437702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:05.149 [2024-11-26 17:47:42.437713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.149 [2024-11-26 17:47:42.437722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 17:47:42.437742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.149 [2024-11-26 17:47:42.437752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:05.149 [2024-11-26 17:47:42.437762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.149 [2024-11-26 17:47:42.437771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.149 [2024-11-26 17:47:42.582907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.149 [2024-11-26 17:47:42.583095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:05.149 [2024-11-26 17:47:42.583116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.149 [2024-11-26 17:47:42.583125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.706835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.424 [2024-11-26 17:47:42.706914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:05.424 [2024-11-26 17:47:42.706931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.424 [2024-11-26 17:47:42.706941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.707059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.424 [2024-11-26 17:47:42.707083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:05.424 [2024-11-26 17:47:42.707092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.424 [2024-11-26 17:47:42.707101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.707149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.424 [2024-11-26 17:47:42.707159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:05.424 [2024-11-26 17:47:42.707168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.424 [2024-11-26 17:47:42.707176] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.707294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.424 [2024-11-26 17:47:42.707307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:05.424 [2024-11-26 17:47:42.707322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.424 [2024-11-26 17:47:42.707330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.707367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.424 [2024-11-26 17:47:42.707377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:05.424 [2024-11-26 17:47:42.707386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.424 [2024-11-26 17:47:42.707395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.707438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.424 [2024-11-26 17:47:42.707448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:05.424 [2024-11-26 17:47:42.707460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.424 [2024-11-26 17:47:42.707469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.424 [2024-11-26 17:47:42.707529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:05.425 [2024-11-26 17:47:42.707540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:05.425 [2024-11-26 17:47:42.707548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:05.425 [2024-11-26 17:47:42.707556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:05.425 [2024-11-26 17:47:42.707721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 655.590 ms, result 0 00:51:06.812 00:51:06.812 00:51:06.812 17:47:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:51:08.722 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:51:08.722 17:47:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:08.722 [2024-11-26 17:47:45.792941] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:51:08.722 [2024-11-26 17:47:45.793161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83350 ] 00:51:08.722 [2024-11-26 17:47:45.965943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:08.722 [2024-11-26 17:47:46.118015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:09.293 [2024-11-26 17:47:46.572118] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:09.293 [2024-11-26 17:47:46.572214] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:09.555 [2024-11-26 17:47:46.737977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.738067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:51:09.555 [2024-11-26 17:47:46.738086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:09.555 [2024-11-26 17:47:46.738096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.738172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.738188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:09.555 [2024-11-26 17:47:46.738198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:51:09.555 [2024-11-26 17:47:46.738208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.738234] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:51:09.555 [2024-11-26 17:47:46.739575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:51:09.555 [2024-11-26 17:47:46.739623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.739634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:09.555 [2024-11-26 17:47:46.739646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:51:09.555 [2024-11-26 17:47:46.739655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.742305] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:51:09.555 [2024-11-26 17:47:46.767366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.767596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:51:09.555 [2024-11-26 17:47:46.767630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.103 ms 00:51:09.555 [2024-11-26 17:47:46.767643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.767830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.767846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:51:09.555 [2024-11-26 17:47:46.767858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:51:09.555 [2024-11-26 17:47:46.767868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.783064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:51:09.555 [2024-11-26 17:47:46.783115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:09.555 [2024-11-26 17:47:46.783130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.093 ms 00:51:09.555 [2024-11-26 17:47:46.783146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.783260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.783280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:09.555 [2024-11-26 17:47:46.783290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:51:09.555 [2024-11-26 17:47:46.783299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.783391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.783402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:51:09.555 [2024-11-26 17:47:46.783411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:51:09.555 [2024-11-26 17:47:46.783419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.783456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:51:09.555 [2024-11-26 17:47:46.790668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.790732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:09.555 [2024-11-26 17:47:46.790754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.234 ms 00:51:09.555 [2024-11-26 17:47:46.790764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.790826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.555 [2024-11-26 17:47:46.790838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:51:09.555 [2024-11-26 17:47:46.790848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:51:09.555 [2024-11-26 17:47:46.790858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.555 [2024-11-26 17:47:46.790926] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:51:09.555 [2024-11-26 17:47:46.790955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:51:09.555 [2024-11-26 17:47:46.790997] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:51:09.555 [2024-11-26 17:47:46.791020] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:51:09.555 [2024-11-26 17:47:46.791133] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:51:09.555 [2024-11-26 17:47:46.791146] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:51:09.555 [2024-11-26 17:47:46.791159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:51:09.556 [2024-11-26 17:47:46.791171] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791183] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791193] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:51:09.556 [2024-11-26 17:47:46.791203] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:51:09.556 [2024-11-26 17:47:46.791216] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:51:09.556 [2024-11-26 17:47:46.791226] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:51:09.556 [2024-11-26 17:47:46.791236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.556 [2024-11-26 17:47:46.791246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:51:09.556 [2024-11-26 17:47:46.791256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:51:09.556 [2024-11-26 17:47:46.791265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.556 [2024-11-26 17:47:46.791360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.556 [2024-11-26 17:47:46.791371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:51:09.556 [2024-11-26 17:47:46.791381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:51:09.556 [2024-11-26 17:47:46.791389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.556 [2024-11-26 17:47:46.791524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:51:09.556 [2024-11-26 17:47:46.791543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:51:09.556 [2024-11-26 17:47:46.791553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:51:09.556 [2024-11-26 17:47:46.791588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:51:09.556 [2024-11-26 17:47:46.791639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:09.556 [2024-11-26 17:47:46.791656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:51:09.556 [2024-11-26 17:47:46.791664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:51:09.556 [2024-11-26 17:47:46.791674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:09.556 [2024-11-26 17:47:46.791697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:51:09.556 [2024-11-26 17:47:46.791706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:51:09.556 [2024-11-26 17:47:46.791714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:51:09.556 [2024-11-26 17:47:46.791733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791741] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:51:09.556 [2024-11-26 17:47:46.791758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:51:09.556 [2024-11-26 17:47:46.791784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:51:09.556 [2024-11-26 17:47:46.791818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:51:09.556 [2024-11-26 17:47:46.791844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:51:09.556 [2024-11-26 17:47:46.791868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:09.556 [2024-11-26 17:47:46.791885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:51:09.556 [2024-11-26 17:47:46.791893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:51:09.556 [2024-11-26 17:47:46.791903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:09.556 [2024-11-26 17:47:46.791911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:51:09.556 [2024-11-26 17:47:46.791919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:51:09.556 [2024-11-26 17:47:46.791927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:51:09.556 [2024-11-26 17:47:46.791942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:51:09.556 [2024-11-26 17:47:46.791949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791957] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:51:09.556 [2024-11-26 17:47:46.791965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:51:09.556 [2024-11-26 17:47:46.791973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:09.556 [2024-11-26 17:47:46.791983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:09.556 [2024-11-26 17:47:46.791992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:51:09.556 [2024-11-26 17:47:46.792003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:51:09.556 [2024-11-26 17:47:46.792014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:51:09.556 
[2024-11-26 17:47:46.792027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:51:09.556 [2024-11-26 17:47:46.792037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:51:09.556 [2024-11-26 17:47:46.792046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:51:09.556 [2024-11-26 17:47:46.792057] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:51:09.556 [2024-11-26 17:47:46.792069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:09.556 [2024-11-26 17:47:46.792089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:51:09.556 [2024-11-26 17:47:46.792099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:51:09.556 [2024-11-26 17:47:46.792109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:51:09.556 [2024-11-26 17:47:46.792119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:51:09.556 [2024-11-26 17:47:46.792128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:51:09.556 [2024-11-26 17:47:46.792137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:51:09.556 [2024-11-26 17:47:46.792145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:51:09.556 [2024-11-26 17:47:46.792154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:51:09.556 [2024-11-26 17:47:46.792177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:51:09.556 [2024-11-26 17:47:46.792185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:51:09.556 [2024-11-26 17:47:46.792194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:51:09.556 [2024-11-26 17:47:46.792203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:51:09.557 [2024-11-26 17:47:46.792211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:51:09.557 [2024-11-26 17:47:46.792219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:51:09.557 [2024-11-26 17:47:46.792227] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:51:09.557 [2024-11-26 17:47:46.792237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:09.557 [2024-11-26 17:47:46.792246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:51:09.557 [2024-11-26 17:47:46.792255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:51:09.557 [2024-11-26 17:47:46.792263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:51:09.557 [2024-11-26 17:47:46.792271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:51:09.557 [2024-11-26 17:47:46.792280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.792290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:51:09.557 [2024-11-26 17:47:46.792299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:51:09.557 [2024-11-26 17:47:46.792308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.847754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.847955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:09.557 [2024-11-26 17:47:46.847977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.483 ms 00:51:09.557 [2024-11-26 17:47:46.847993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.848117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.848126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:51:09.557 [2024-11-26 17:47:46.848137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:51:09.557 [2024-11-26 17:47:46.848145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.921520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.921635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:09.557 [2024-11-26 17:47:46.921653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.395 ms 00:51:09.557 [2024-11-26 17:47:46.921662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.921746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.921763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:09.557 [2024-11-26 17:47:46.921773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:51:09.557 [2024-11-26 17:47:46.921782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.922669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.922693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:09.557 [2024-11-26 17:47:46.922703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:51:09.557 [2024-11-26 17:47:46.922712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.922866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.922883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:09.557 [2024-11-26 17:47:46.922900] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:51:09.557 [2024-11-26 17:47:46.922909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.947393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.947473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:09.557 [2024-11-26 17:47:46.947489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.499 ms 00:51:09.557 [2024-11-26 17:47:46.947498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.557 [2024-11-26 17:47:46.971861] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:51:09.557 [2024-11-26 17:47:46.971944] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:51:09.557 [2024-11-26 17:47:46.971962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.557 [2024-11-26 17:47:46.971971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:51:09.557 [2024-11-26 17:47:46.971985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.304 ms 00:51:09.557 [2024-11-26 17:47:46.971994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.818 [2024-11-26 17:47:47.009458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.818 [2024-11-26 17:47:47.009770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:51:09.818 [2024-11-26 17:47:47.009797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.428 ms 00:51:09.818 [2024-11-26 17:47:47.009808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.818 [2024-11-26 17:47:47.033906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.818 [2024-11-26 17:47:47.034003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:51:09.818 [2024-11-26 17:47:47.034022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.995 ms 00:51:09.818 [2024-11-26 17:47:47.034031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.818 [2024-11-26 17:47:47.057512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.818 [2024-11-26 17:47:47.057761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:51:09.818 [2024-11-26 17:47:47.057784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.421 ms 00:51:09.818 [2024-11-26 17:47:47.057794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.818 [2024-11-26 17:47:47.058889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.818 [2024-11-26 17:47:47.058931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:51:09.818 [2024-11-26 17:47:47.058947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:51:09.818 [2024-11-26 17:47:47.058956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.818 [2024-11-26 17:47:47.172717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.818 [2024-11-26 17:47:47.172814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:51:09.818 [2024-11-26 17:47:47.172842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 113.947 ms 00:51:09.818 [2024-11-26 17:47:47.172852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.190977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:51:09.819 [2024-11-26 17:47:47.196791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.196855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:51:09.819 [2024-11-26 17:47:47.196871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.872 ms 00:51:09.819 [2024-11-26 17:47:47.196881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.197049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.197063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:51:09.819 [2024-11-26 17:47:47.197080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:51:09.819 [2024-11-26 17:47:47.197088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.198535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.198571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:51:09.819 [2024-11-26 17:47:47.198582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:51:09.819 [2024-11-26 17:47:47.198591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.198636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.198648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:51:09.819 [2024-11-26 17:47:47.198659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:09.819 [2024-11-26 17:47:47.198668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.198716] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:51:09.819 [2024-11-26 17:47:47.198728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.198737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:51:09.819 [2024-11-26 17:47:47.198746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:51:09.819 [2024-11-26 17:47:47.198756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.246191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.246298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:51:09.819 [2024-11-26 17:47:47.246333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.498 ms 00:51:09.819 [2024-11-26 17:47:47.246344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:09.819 [2024-11-26 17:47:47.246491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:09.819 [2024-11-26 17:47:47.246505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:51:09.819 [2024-11-26 17:47:47.246516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:51:09.819 [2024-11-26 17:47:47.246525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
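A consistency check on the layout dump earlier in this second startup, which reports 20971520 L2P entries with a 4-byte address size, and an l2p region of 0x5000 blocks covering 80.00 MiB. Assuming a 4 KiB FTL block size (an assumption on our part, though every region size printed in this dump is consistent with it), both figures independently work out to the same 80 MiB:

/* Cross-check of the layout dump above. The 4 KiB FTL block size is an
 * assumption; the entry count, address size, and blk_sz come from the log. */
#include <stdio.h>

int main(void)
{
        const unsigned long entries   = 20971520UL; /* "L2P entries" */
        const unsigned long addr_sz   = 4UL;        /* "L2P address size" */
        const unsigned long blk_sz    = 0x5000UL;   /* l2p region, FTL blocks */
        const unsigned long ftl_block = 4096UL;     /* assumed block size */

        printf("L2P table:  %.2f MiB\n",
               entries * addr_sz / (1024.0 * 1024.0));   /* 80.00 MiB */
        printf("l2p region: %.2f MiB\n",
               blk_sz * ftl_block / (1024.0 * 1024.0));  /* 80.00 MiB */
        return 0;
}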
00:51:09.819 [2024-11-26 17:47:47.248236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 510.695 ms, result 0 00:51:11.200  [2024-11-26T17:47:49.600Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-26T17:47:50.557Z] Copying: 60/1024 [MB] (29 MBps) [2024-11-26T17:47:51.496Z] Copying: 88/1024 [MB] (28 MBps) [2024-11-26T17:47:52.445Z] Copying: 116/1024 [MB] (27 MBps) [2024-11-26T17:47:53.821Z] Copying: 143/1024 [MB] (27 MBps) [2024-11-26T17:47:54.760Z] Copying: 172/1024 [MB] (28 MBps) [2024-11-26T17:47:55.697Z] Copying: 200/1024 [MB] (28 MBps) [2024-11-26T17:47:56.636Z] Copying: 229/1024 [MB] (28 MBps) [2024-11-26T17:47:57.574Z] Copying: 261/1024 [MB] (31 MBps) [2024-11-26T17:47:58.511Z] Copying: 290/1024 [MB] (29 MBps) [2024-11-26T17:47:59.448Z] Copying: 316/1024 [MB] (26 MBps) [2024-11-26T17:48:00.834Z] Copying: 345/1024 [MB] (28 MBps) [2024-11-26T17:48:01.777Z] Copying: 378/1024 [MB] (33 MBps) [2024-11-26T17:48:02.716Z] Copying: 409/1024 [MB] (30 MBps) [2024-11-26T17:48:03.656Z] Copying: 440/1024 [MB] (31 MBps) [2024-11-26T17:48:04.597Z] Copying: 471/1024 [MB] (30 MBps) [2024-11-26T17:48:05.536Z] Copying: 501/1024 [MB] (30 MBps) [2024-11-26T17:48:06.510Z] Copying: 533/1024 [MB] (31 MBps) [2024-11-26T17:48:07.450Z] Copying: 565/1024 [MB] (31 MBps) [2024-11-26T17:48:08.826Z] Copying: 597/1024 [MB] (32 MBps) [2024-11-26T17:48:09.398Z] Copying: 630/1024 [MB] (33 MBps) [2024-11-26T17:48:10.778Z] Copying: 663/1024 [MB] (32 MBps) [2024-11-26T17:48:11.717Z] Copying: 695/1024 [MB] (32 MBps) [2024-11-26T17:48:12.656Z] Copying: 727/1024 [MB] (32 MBps) [2024-11-26T17:48:13.594Z] Copying: 761/1024 [MB] (33 MBps) [2024-11-26T17:48:14.532Z] Copying: 794/1024 [MB] (32 MBps) [2024-11-26T17:48:15.468Z] Copying: 827/1024 [MB] (33 MBps) [2024-11-26T17:48:16.404Z] Copying: 860/1024 [MB] (32 MBps) [2024-11-26T17:48:17.810Z] Copying: 892/1024 [MB] (31 MBps) [2024-11-26T17:48:18.394Z] Copying: 924/1024 [MB] (32 MBps) [2024-11-26T17:48:19.774Z] Copying: 957/1024 [MB] (32 MBps) [2024-11-26T17:48:20.712Z] Copying: 990/1024 [MB] (32 MBps) [2024-11-26T17:48:20.712Z] Copying: 1021/1024 [MB] (31 MBps) [2024-11-26T17:48:20.712Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-26 17:48:20.592290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.592385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:43.266 [2024-11-26 17:48:20.592411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:51:43.266 [2024-11-26 17:48:20.592427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.592469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:43.266 [2024-11-26 17:48:20.600195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.600328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:43.266 [2024-11-26 17:48:20.600370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.709 ms 00:51:43.266 [2024-11-26 17:48:20.600400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.600741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.600792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:43.266 [2024-11-26 17:48:20.600841] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:51:43.266 [2024-11-26 17:48:20.600877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.605109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.605234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:43.266 [2024-11-26 17:48:20.605272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.722 ms 00:51:43.266 [2024-11-26 17:48:20.605309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.611140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.611256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:43.266 [2024-11-26 17:48:20.611287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.788 ms 00:51:43.266 [2024-11-26 17:48:20.611309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.657500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.657730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:43.266 [2024-11-26 17:48:20.657769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.149 ms 00:51:43.266 [2024-11-26 17:48:20.657808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.684210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.684357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:43.266 [2024-11-26 17:48:20.684398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.331 ms 00:51:43.266 [2024-11-26 17:48:20.684422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.266 [2024-11-26 17:48:20.686300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.266 [2024-11-26 17:48:20.686401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:43.266 [2024-11-26 17:48:20.686442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:51:43.266 [2024-11-26 17:48:20.686467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.527 [2024-11-26 17:48:20.733360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.527 [2024-11-26 17:48:20.733502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:43.527 [2024-11-26 17:48:20.733541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.939 ms 00:51:43.527 [2024-11-26 17:48:20.733572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.527 [2024-11-26 17:48:20.777981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.527 [2024-11-26 17:48:20.778164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:43.527 [2024-11-26 17:48:20.778202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.350 ms 00:51:43.527 [2024-11-26 17:48:20.778223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.527 [2024-11-26 17:48:20.822190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.527 [2024-11-26 17:48:20.822342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist superblock 00:51:43.527 [2024-11-26 17:48:20.822382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.921 ms 00:51:43.527 [2024-11-26 17:48:20.822422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.527 [2024-11-26 17:48:20.866269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.527 [2024-11-26 17:48:20.866467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:43.527 [2024-11-26 17:48:20.866505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.688 ms 00:51:43.527 [2024-11-26 17:48:20.866528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.527 [2024-11-26 17:48:20.866663] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:43.527 [2024-11-26 17:48:20.866727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:51:43.527 [2024-11-26 17:48:20.866781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:51:43.527 [2024-11-26 17:48:20.866832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.866873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.866920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.866955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.866990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 
17:48:20.867501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.867982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.868025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.868063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.868101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.868142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:43.527 [2024-11-26 17:48:20.868182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:51:43.528 [2024-11-26 17:48:20.868452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.868986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:43.528 [2024-11-26 17:48:20.869004] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:43.528 [2024-11-26 17:48:20.869012] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4ea280-838a-45d3-81ea-2a0df37d5369 00:51:43.528 [2024-11-26 17:48:20.869022] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:51:43.528 [2024-11-26 17:48:20.869031] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:51:43.528 [2024-11-26 17:48:20.869040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:51:43.528 [2024-11-26 17:48:20.869049] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:51:43.528 [2024-11-26 17:48:20.869077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:43.528 [2024-11-26 17:48:20.869087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:43.528 [2024-11-26 17:48:20.869096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:43.528 [2024-11-26 17:48:20.869103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:43.528 [2024-11-26 17:48:20.869110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:43.528 [2024-11-26 17:48:20.869119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.528 [2024-11-26 17:48:20.869128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:43.528 [2024-11-26 17:48:20.869138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:51:43.528 [2024-11-26 17:48:20.869150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.528 [2024-11-26 17:48:20.891410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.528 [2024-11-26 17:48:20.891583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:43.528 [2024-11-26 17:48:20.891601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.236 ms 00:51:43.528 [2024-11-26 17:48:20.891630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.528 [2024-11-26 17:48:20.892311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:43.528 [2024-11-26 17:48:20.892337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:43.528 [2024-11-26 17:48:20.892347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:51:43.528 [2024-11-26 17:48:20.892356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.528 [2024-11-26 
17:48:20.947920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.528 [2024-11-26 17:48:20.948104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:43.528 [2024-11-26 17:48:20.948121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.528 [2024-11-26 17:48:20.948130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.528 [2024-11-26 17:48:20.948239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.528 [2024-11-26 17:48:20.948257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:43.528 [2024-11-26 17:48:20.948265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.528 [2024-11-26 17:48:20.948274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.528 [2024-11-26 17:48:20.948401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.528 [2024-11-26 17:48:20.948415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:43.528 [2024-11-26 17:48:20.948424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.529 [2024-11-26 17:48:20.948432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.529 [2024-11-26 17:48:20.948451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.529 [2024-11-26 17:48:20.948460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:43.529 [2024-11-26 17:48:20.948474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.529 [2024-11-26 17:48:20.948482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.087461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.087550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:43.788 [2024-11-26 17:48:21.087565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.087574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.203327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.203533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:43.788 [2024-11-26 17:48:21.203551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.203560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.203715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.203729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:43.788 [2024-11-26 17:48:21.203739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.203748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.203802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.203813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:43.788 [2024-11-26 17:48:21.203822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.203836] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.203978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.203993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:43.788 [2024-11-26 17:48:21.204002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.204011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.204054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.204067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:43.788 [2024-11-26 17:48:21.204077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.204085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.204138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.204149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:43.788 [2024-11-26 17:48:21.204158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.204167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.204219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:43.788 [2024-11-26 17:48:21.204230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:43.788 [2024-11-26 17:48:21.204239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:43.788 [2024-11-26 17:48:21.204253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:43.788 [2024-11-26 17:48:21.204401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 613.267 ms, result 0 00:51:45.169 00:51:45.169 00:51:45.169 17:48:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:51:47.099 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81697 00:51:47.099 Process with pid 81697 is not found 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81697 ']' 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81697 00:51:47.099 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 
958: kill: (81697) - No such process 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81697 is not found' 00:51:47.099 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:51:47.358 Remove shared memory files 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:51:47.358 ************************************ 00:51:47.358 END TEST ftl_dirty_shutdown 00:51:47.358 ************************************ 00:51:47.358 00:51:47.358 real 3m18.118s 00:51:47.358 user 3m44.450s 00:51:47.358 sys 0m31.550s 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:47.358 17:48:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:47.358 17:48:24 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:51:47.358 17:48:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:51:47.358 17:48:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:47.358 17:48:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:51:47.358 ************************************ 00:51:47.358 START TEST ftl_upgrade_shutdown 00:51:47.358 ************************************ 00:51:47.358 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:51:47.617 * Looking for test storage... 
00:51:47.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:47.617 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:51:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:47.618 --rc genhtml_branch_coverage=1 00:51:47.618 --rc genhtml_function_coverage=1 00:51:47.618 --rc genhtml_legend=1 00:51:47.618 --rc geninfo_all_blocks=1 00:51:47.618 --rc geninfo_unexecuted_blocks=1 00:51:47.618 00:51:47.618 ' 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:51:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:47.618 --rc genhtml_branch_coverage=1 00:51:47.618 --rc genhtml_function_coverage=1 00:51:47.618 --rc genhtml_legend=1 00:51:47.618 --rc geninfo_all_blocks=1 00:51:47.618 --rc geninfo_unexecuted_blocks=1 00:51:47.618 00:51:47.618 ' 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:51:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:47.618 --rc genhtml_branch_coverage=1 00:51:47.618 --rc genhtml_function_coverage=1 00:51:47.618 --rc genhtml_legend=1 00:51:47.618 --rc geninfo_all_blocks=1 00:51:47.618 --rc geninfo_unexecuted_blocks=1 00:51:47.618 00:51:47.618 ' 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:51:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:47.618 --rc genhtml_branch_coverage=1 00:51:47.618 --rc genhtml_function_coverage=1 00:51:47.618 --rc genhtml_legend=1 00:51:47.618 --rc geninfo_all_blocks=1 00:51:47.618 --rc geninfo_unexecuted_blocks=1 00:51:47.618 00:51:47.618 ' 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:51:47.618 17:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:51:47.618 17:48:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83812 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83812 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83812 ']' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:47.618 17:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:47.877 [2024-11-26 17:48:25.139031] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
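The records below show the upgrade_shutdown script assembling its FTL device over JSON-RPC: attach the base NVMe controller (exposed as basen1), create an lvstore plus a thin 20480 MiB lvol on it, attach the cache NVMe controller (cachen1), split off a 5120 MiB partition, and hand both bdevs to bdev_ftl_create. A condensed sketch of that sequence, using only the rpc.py calls that appear verbatim in the trace below; the shell variables are editorial shorthand for the UUIDs the script captures from each call's output:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
    LVS=$($RPC bdev_lvol_create_lvstore basen1 lvs)                     # prints the lvstore UUID
    LVOL=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$LVS")           # thin-provisioned 20480 MiB lvol
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
    $RPC bdev_split_create cachen1 -s 5120 1                            # creates cachen1p0
    $RPC -t 60 bdev_ftl_create -b ftl -d "$LVOL" -c cachen1p0 --l2p_dram_limit 2
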
00:51:47.877 [2024-11-26 17:48:25.139692] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83812 ] 00:51:47.877 [2024-11-26 17:48:25.319148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:48.137 [2024-11-26 17:48:25.468942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:51:49.515 17:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:51:49.774 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:51:49.774 { 00:51:49.774 "name": "basen1", 00:51:49.774 "aliases": [ 00:51:49.774 "c93fbc56-a4bf-42f5-8c31-f7ac4a288721" 00:51:49.774 ], 00:51:49.774 "product_name": "NVMe disk", 00:51:49.774 "block_size": 4096, 00:51:49.774 "num_blocks": 1310720, 00:51:49.774 "uuid": "c93fbc56-a4bf-42f5-8c31-f7ac4a288721", 00:51:49.774 "numa_id": -1, 00:51:49.774 "assigned_rate_limits": { 00:51:49.774 "rw_ios_per_sec": 0, 00:51:49.774 "rw_mbytes_per_sec": 0, 00:51:49.774 "r_mbytes_per_sec": 0, 00:51:49.774 "w_mbytes_per_sec": 0 00:51:49.774 }, 00:51:49.774 "claimed": true, 00:51:49.774 "claim_type": "read_many_write_one", 00:51:49.774 "zoned": false, 00:51:49.774 "supported_io_types": { 00:51:49.774 "read": true, 00:51:49.774 "write": true, 00:51:49.774 "unmap": true, 00:51:49.774 "flush": true, 00:51:49.774 "reset": true, 00:51:49.774 "nvme_admin": true, 00:51:49.774 "nvme_io": true, 00:51:49.774 "nvme_io_md": false, 00:51:49.774 "write_zeroes": true, 00:51:49.774 "zcopy": false, 00:51:49.774 "get_zone_info": false, 00:51:49.774 "zone_management": false, 00:51:49.774 "zone_append": false, 00:51:49.774 "compare": true, 00:51:49.774 "compare_and_write": false, 00:51:49.774 "abort": true, 00:51:49.774 "seek_hole": false, 00:51:49.774 "seek_data": false, 00:51:49.774 "copy": true, 00:51:49.774 "nvme_iov_md": false 00:51:49.774 }, 00:51:49.774 "driver_specific": { 00:51:49.774 "nvme": [ 00:51:49.774 { 00:51:49.774 "pci_address": "0000:00:11.0", 00:51:49.774 "trid": { 00:51:49.774 "trtype": "PCIe", 00:51:49.774 "traddr": "0000:00:11.0" 00:51:49.774 }, 00:51:49.774 "ctrlr_data": { 00:51:49.774 "cntlid": 0, 00:51:49.774 "vendor_id": "0x1b36", 00:51:49.774 "model_number": "QEMU NVMe Ctrl", 00:51:49.774 "serial_number": "12341", 00:51:49.774 "firmware_revision": "8.0.0", 00:51:49.774 "subnqn": "nqn.2019-08.org.qemu:12341", 00:51:49.774 "oacs": { 00:51:49.774 "security": 0, 00:51:49.774 "format": 1, 00:51:49.774 "firmware": 0, 00:51:49.774 "ns_manage": 1 00:51:49.774 }, 00:51:49.774 "multi_ctrlr": false, 00:51:49.774 "ana_reporting": false 00:51:49.774 }, 00:51:49.775 "vs": { 00:51:49.775 "nvme_version": "1.4" 00:51:49.775 }, 00:51:49.775 "ns_data": { 00:51:49.775 "id": 1, 00:51:49.775 "can_share": false 00:51:49.775 } 00:51:49.775 } 00:51:49.775 ], 00:51:49.775 "mp_policy": "active_passive" 00:51:49.775 } 00:51:49.775 } 00:51:49.775 ]' 00:51:49.775 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:51:49.775 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:51:49.775 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:51:50.033 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:51:50.292 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=52e0b836-1818-4bd9-a822-90cf12fc6b0d 00:51:50.292 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:51:50.292 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52e0b836-1818-4bd9-a822-90cf12fc6b0d 00:51:50.552 17:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f8f7184d-70f9-46f8-a515-e2b802debca2 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f8f7184d-70f9-46f8-a515-e2b802debca2 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=c53580e2-6cb4-4f94-9881-591099c9aef7 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z c53580e2-6cb4-4f94-9881-591099c9aef7 ]] 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 c53580e2-6cb4-4f94-9881-591099c9aef7 5120 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=c53580e2-6cb4-4f94-9881-591099c9aef7 00:51:50.812 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size c53580e2-6cb4-4f94-9881-591099c9aef7 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c53580e2-6cb4-4f94-9881-591099c9aef7 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c53580e2-6cb4-4f94-9881-591099c9aef7 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:51:51.071 { 00:51:51.071 "name": "c53580e2-6cb4-4f94-9881-591099c9aef7", 00:51:51.071 "aliases": [ 00:51:51.071 "lvs/basen1p0" 00:51:51.071 ], 00:51:51.071 "product_name": "Logical Volume", 00:51:51.071 "block_size": 4096, 00:51:51.071 "num_blocks": 5242880, 00:51:51.071 "uuid": "c53580e2-6cb4-4f94-9881-591099c9aef7", 00:51:51.071 "assigned_rate_limits": { 00:51:51.071 "rw_ios_per_sec": 0, 00:51:51.071 "rw_mbytes_per_sec": 0, 00:51:51.071 "r_mbytes_per_sec": 0, 00:51:51.071 "w_mbytes_per_sec": 0 00:51:51.071 }, 00:51:51.071 "claimed": false, 00:51:51.071 "zoned": false, 00:51:51.071 "supported_io_types": { 00:51:51.071 "read": true, 00:51:51.071 "write": true, 00:51:51.071 "unmap": true, 00:51:51.071 "flush": false, 00:51:51.071 "reset": true, 00:51:51.071 "nvme_admin": false, 00:51:51.071 "nvme_io": false, 00:51:51.071 "nvme_io_md": false, 00:51:51.071 "write_zeroes": 
true, 00:51:51.071 "zcopy": false, 00:51:51.071 "get_zone_info": false, 00:51:51.071 "zone_management": false, 00:51:51.071 "zone_append": false, 00:51:51.071 "compare": false, 00:51:51.071 "compare_and_write": false, 00:51:51.071 "abort": false, 00:51:51.071 "seek_hole": true, 00:51:51.071 "seek_data": true, 00:51:51.071 "copy": false, 00:51:51.071 "nvme_iov_md": false 00:51:51.071 }, 00:51:51.071 "driver_specific": { 00:51:51.071 "lvol": { 00:51:51.071 "lvol_store_uuid": "f8f7184d-70f9-46f8-a515-e2b802debca2", 00:51:51.071 "base_bdev": "basen1", 00:51:51.071 "thin_provision": true, 00:51:51.071 "num_allocated_clusters": 0, 00:51:51.071 "snapshot": false, 00:51:51.071 "clone": false, 00:51:51.071 "esnap_clone": false 00:51:51.071 } 00:51:51.071 } 00:51:51.071 } 00:51:51.071 ]' 00:51:51.071 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:51:51.330 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:51:51.588 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:51:51.588 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:51:51.588 17:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:51:51.848 17:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:51:51.848 17:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:51:51.848 17:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c53580e2-6cb4-4f94-9881-591099c9aef7 -c cachen1p0 --l2p_dram_limit 2 00:51:52.110 [2024-11-26 17:48:29.299140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.299233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:51:52.110 [2024-11-26 17:48:29.299253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:51:52.110 [2024-11-26 17:48:29.299262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.299348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.299358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:51:52.110 [2024-11-26 17:48:29.299369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:51:52.110 [2024-11-26 17:48:29.299377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.299401] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:51:52.110 [2024-11-26 
17:48:29.300550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:51:52.110 [2024-11-26 17:48:29.300587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.300596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:51:52.110 [2024-11-26 17:48:29.300621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.192 ms 00:51:52.110 [2024-11-26 17:48:29.300630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.300713] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b8c69390-3108-4515-9c8f-ad8b4fa0e6bd 00:51:52.110 [2024-11-26 17:48:29.303302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.303344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:51:52.110 [2024-11-26 17:48:29.303356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:51:52.110 [2024-11-26 17:48:29.303366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.317901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.317944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:51:52.110 [2024-11-26 17:48:29.317959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.498 ms 00:51:52.110 [2024-11-26 17:48:29.317971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.318034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.318054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:51:52.110 [2024-11-26 17:48:29.318065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:51:52.110 [2024-11-26 17:48:29.318080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.318166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.318180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:51:52.110 [2024-11-26 17:48:29.318194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:51:52.110 [2024-11-26 17:48:29.318207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.318236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:51:52.110 [2024-11-26 17:48:29.324665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.324722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:51:52.110 [2024-11-26 17:48:29.324740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.446 ms 00:51:52.110 [2024-11-26 17:48:29.324748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.324786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.324796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:51:52.110 [2024-11-26 17:48:29.324807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:51:52.110 [2024-11-26 17:48:29.324815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.324853] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:51:52.110 [2024-11-26 17:48:29.324993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:51:52.110 [2024-11-26 17:48:29.325018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:51:52.110 [2024-11-26 17:48:29.325030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:51:52.110 [2024-11-26 17:48:29.325043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325053] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325064] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:51:52.110 [2024-11-26 17:48:29.325075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:51:52.110 [2024-11-26 17:48:29.325085] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:51:52.110 [2024-11-26 17:48:29.325093] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:51:52.110 [2024-11-26 17:48:29.325105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.325113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:51:52.110 [2024-11-26 17:48:29.325124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:51:52.110 [2024-11-26 17:48:29.325131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.325208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.110 [2024-11-26 17:48:29.325237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:51:52.110 [2024-11-26 17:48:29.325249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:51:52.110 [2024-11-26 17:48:29.325257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.110 [2024-11-26 17:48:29.325366] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:51:52.110 [2024-11-26 17:48:29.325381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:51:52.110 [2024-11-26 17:48:29.325393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:51:52.110 [2024-11-26 17:48:29.325420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:51:52.110 [2024-11-26 17:48:29.325439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:51:52.110 [2024-11-26 17:48:29.325450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:51:52.110 [2024-11-26 17:48:29.325457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:51:52.110 [2024-11-26 17:48:29.325474] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:51:52.110 [2024-11-26 17:48:29.325484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:51:52.110 [2024-11-26 17:48:29.325500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:51:52.110 [2024-11-26 17:48:29.325507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:51:52.110 [2024-11-26 17:48:29.325526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:51:52.110 [2024-11-26 17:48:29.325535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:51:52.110 [2024-11-26 17:48:29.325552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:51:52.110 [2024-11-26 17:48:29.325568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:51:52.110 [2024-11-26 17:48:29.325601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:51:52.110 [2024-11-26 17:48:29.325612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:51:52.110 [2024-11-26 17:48:29.325642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:51:52.110 [2024-11-26 17:48:29.325650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:51:52.110 [2024-11-26 17:48:29.325669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:51:52.110 [2024-11-26 17:48:29.325680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:51:52.110 [2024-11-26 17:48:29.325701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:51:52.110 [2024-11-26 17:48:29.325708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:51:52.110 [2024-11-26 17:48:29.325726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:51:52.110 [2024-11-26 17:48:29.325736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.110 [2024-11-26 17:48:29.325744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:51:52.111 [2024-11-26 17:48:29.325753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:51:52.111 [2024-11-26 17:48:29.325761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.111 [2024-11-26 17:48:29.325772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:51:52.111 [2024-11-26 17:48:29.325779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:51:52.111 [2024-11-26 17:48:29.325789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.111 [2024-11-26 17:48:29.325797] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:51:52.111 [2024-11-26 17:48:29.325809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:51:52.111 [2024-11-26 17:48:29.325818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:52.111 [2024-11-26 17:48:29.325831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.111 [2024-11-26 17:48:29.325840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:51:52.111 [2024-11-26 17:48:29.325853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:51:52.111 [2024-11-26 17:48:29.325861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:51:52.111 [2024-11-26 17:48:29.325872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:51:52.111 [2024-11-26 17:48:29.325879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:51:52.111 [2024-11-26 17:48:29.325890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:51:52.111 [2024-11-26 17:48:29.325904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:51:52.111 [2024-11-26 17:48:29.325921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.325932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:51:52.111 [2024-11-26 17:48:29.325944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.325952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.325963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:51:52.111 [2024-11-26 17:48:29.325972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:51:52.111 [2024-11-26 17:48:29.325983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:51:52.111 [2024-11-26 17:48:29.325991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:51:52.111 [2024-11-26 17:48:29.326003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:51:52.111 [2024-11-26 17:48:29.326072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:51:52.111 [2024-11-26 17:48:29.326084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:52.111 [2024-11-26 17:48:29.326105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:51:52.111 [2024-11-26 17:48:29.326114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:51:52.111 [2024-11-26 17:48:29.326125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:51:52.111 [2024-11-26 17:48:29.326135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.111 [2024-11-26 17:48:29.326146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:51:52.111 [2024-11-26 17:48:29.326156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:51:52.111 [2024-11-26 17:48:29.326167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.111 [2024-11-26 17:48:29.326218] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
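The FTL instance starting up here sits on a stack the harness assembled entirely over rpc.py in the trace above; get_bdev_size derives the 20480 MiB base size as block_size x num_blocks (4096 B x 5242880 blocks = 20 GiB), matching the "Base device capacity" in the layout dump. A condensed replay of that sequence, using the UUIDs minted in this particular run (they would differ on any other run); the $rpc shorthand is illustrative, not part of the harness:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_delete_lvstore -u 52e0b836-1818-4bd9-a822-90cf12fc6b0d   # clear the stale lvstore found first
    $rpc bdev_lvol_create_lvstore basen1 lvs                                # fresh lvstore on the base namespace
    $rpc bdev_lvol_create basen1p0 20480 -t -u f8f7184d-70f9-46f8-a515-e2b802debca2   # 20 GiB thin-provisioned base bdev
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0       # NV cache controller -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                                # 5 GiB cache partition cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d c53580e2-6cb4-4f94-9881-591099c9aef7 -c cachen1p0 --l2p_dram_limit 2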
00:51:52.111 [2024-11-26 17:48:29.326235] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:51:55.417 [2024-11-26 17:48:32.677031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.677131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:51:55.417 [2024-11-26 17:48:32.677148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3357.273 ms 00:51:55.417 [2024-11-26 17:48:32.677160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.725437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.725507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:51:55.417 [2024-11-26 17:48:32.725523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.046 ms 00:51:55.417 [2024-11-26 17:48:32.725534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.725721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.725739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:51:55.417 [2024-11-26 17:48:32.725750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:51:55.417 [2024-11-26 17:48:32.725768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.778072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.778136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:51:55.417 [2024-11-26 17:48:32.778150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.361 ms 00:51:55.417 [2024-11-26 17:48:32.778161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.778218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.778231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:51:55.417 [2024-11-26 17:48:32.778241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:51:55.417 [2024-11-26 17:48:32.778252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.779133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.779164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:51:55.417 [2024-11-26 17:48:32.779185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.781 ms 00:51:55.417 [2024-11-26 17:48:32.779198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.779243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.779259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:51:55.417 [2024-11-26 17:48:32.779268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:51:55.417 [2024-11-26 17:48:32.779282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.805180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.805243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:51:55.417 [2024-11-26 17:48:32.805256] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.925 ms 00:51:55.417 [2024-11-26 17:48:32.805267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.417 [2024-11-26 17:48:32.832621] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:51:55.417 [2024-11-26 17:48:32.834425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.417 [2024-11-26 17:48:32.834450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:51:55.418 [2024-11-26 17:48:32.834464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.096 ms 00:51:55.418 [2024-11-26 17:48:32.834473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:32.868730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:32.868781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:51:55.679 [2024-11-26 17:48:32.868797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.275 ms 00:51:55.679 [2024-11-26 17:48:32.868805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:32.868924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:32.868934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:51:55.679 [2024-11-26 17:48:32.868951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:51:55.679 [2024-11-26 17:48:32.868958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:32.904741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:32.904778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:51:55.679 [2024-11-26 17:48:32.904809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.799 ms 00:51:55.679 [2024-11-26 17:48:32.904817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:32.940230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:32.940264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:51:55.679 [2024-11-26 17:48:32.940279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.434 ms 00:51:55.679 [2024-11-26 17:48:32.940288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:32.941087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:32.941115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:51:55.679 [2024-11-26 17:48:32.941132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.758 ms 00:51:55.679 [2024-11-26 17:48:32.941141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:33.044902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:33.044981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:51:55.679 [2024-11-26 17:48:33.045005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 103.901 ms 00:51:55.679 [2024-11-26 17:48:33.045014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:33.084601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:51:55.679 [2024-11-26 17:48:33.084659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:51:55.679 [2024-11-26 17:48:33.084675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.570 ms 00:51:55.679 [2024-11-26 17:48:33.084692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.679 [2024-11-26 17:48:33.122128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.679 [2024-11-26 17:48:33.122178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:51:55.679 [2024-11-26 17:48:33.122195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.459 ms 00:51:55.679 [2024-11-26 17:48:33.122203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.939 [2024-11-26 17:48:33.160967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.939 [2024-11-26 17:48:33.161009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:51:55.940 [2024-11-26 17:48:33.161025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.788 ms 00:51:55.940 [2024-11-26 17:48:33.161033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.940 [2024-11-26 17:48:33.161084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.940 [2024-11-26 17:48:33.161095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:51:55.940 [2024-11-26 17:48:33.161110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:51:55.940 [2024-11-26 17:48:33.161120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.940 [2024-11-26 17:48:33.161226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.940 [2024-11-26 17:48:33.161240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:51:55.940 [2024-11-26 17:48:33.161258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:51:55.940 [2024-11-26 17:48:33.161266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.940 [2024-11-26 17:48:33.162789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3870.559 ms, result 0 00:51:55.940 { 00:51:55.940 "name": "ftl", 00:51:55.940 "uuid": "b8c69390-3108-4515-9c8f-ad8b4fa0e6bd" 00:51:55.940 } 00:51:55.940 17:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:51:56.200 [2024-11-26 17:48:33.385143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:56.200 17:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:51:56.200 17:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:51:56.459 [2024-11-26 17:48:33.764817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:51:56.459 17:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:51:56.718 [2024-11-26 17:48:33.979567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:51:56.718 17:48:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:51:56.977 Fill FTL, iteration 1 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83940 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83940 /var/tmp/spdk.tgt.sock 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83940 ']' 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:51:56.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:56.977 17:48:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:57.237 [2024-11-26 17:48:34.446574] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
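tcp_dd is a thin wrapper: with the parameters fixed at upgrade_shutdown.sh@28-34, each fill pass streams bs x count = 1048576 B x 1024 = 1 GiB (the declared size of 1073741824 bytes) of urandom into ftln1 at queue depth 2, so the two iterations together touch the first 2 GiB of the 20 GiB device. As the ftl/common.sh@199 trace below shows, the wrapper expands to a standalone spdk_dd acting as an NVMe/TCP initiator:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0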
00:51:57.237 [2024-11-26 17:48:34.446825] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83940 ] 00:51:57.237 [2024-11-26 17:48:34.631104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:57.496 [2024-11-26 17:48:34.774683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:58.455 17:48:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:58.455 17:48:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:51:58.455 17:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:51:58.715 ftln1 00:51:58.975 17:48:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:51:58.975 17:48:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83940 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83940 ']' 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83940 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83940 00:51:59.235 killing process with pid 83940 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83940' 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83940 00:51:59.235 17:48:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83940 00:52:02.529 17:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:52:02.529 17:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:52:02.529 [2024-11-26 17:48:39.342271] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
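Before spdk_dd can run, tcp_initiator_setup attaches the exported FTL namespace over TCP from the short-lived spdk_tgt (yielding ftln1 above) and snapshots the resulting bdev configuration into ini.json; that file is all spdk_dd needs, which is why killing pid 83940 afterwards is harmless. A sketch of those ftl/common.sh@167-173 steps; the final redirect into ini.json is inferred, since the trace only shows the echo/save_subsystem_config pieces:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev        # bdev subsystem config only
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json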
00:52:02.529 [2024-11-26 17:48:39.342490] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83998 ] 00:52:02.529 [2024-11-26 17:48:39.524128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:02.529 [2024-11-26 17:48:39.672355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:03.930  [2024-11-26T17:48:42.312Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-26T17:48:43.250Z] Copying: 461/1024 [MB] (236 MBps) [2024-11-26T17:48:44.625Z] Copying: 694/1024 [MB] (233 MBps) [2024-11-26T17:48:44.883Z] Copying: 917/1024 [MB] (223 MBps) [2024-11-26T17:48:46.258Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:52:08.812 00:52:08.812 Calculate MD5 checksum, iteration 1 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:08.812 17:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:08.812 [2024-11-26 17:48:46.104967] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
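The checksum half of each iteration mirrors the fill: the 1 GiB window just written is read back out of ftln1 into a scratch file and hashed, producing a digest that must survive the upcoming prep_upgrade_on_shutdown cycle. From the trace above, iteration 1 reduces to:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '    # -> ec22201c9ba7ffd6ef7b5ad822a9c9b5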
00:52:08.812 [2024-11-26 17:48:46.105230] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84068 ] 00:52:09.071 [2024-11-26 17:48:46.290730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:09.071 [2024-11-26 17:48:46.444605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:11.045  [2024-11-26T17:48:49.061Z] Copying: 507/1024 [MB] (507 MBps) [2024-11-26T17:48:49.998Z] Copying: 1024/1024 [MB] (average 524 MBps) 00:52:12.552 00:52:12.811 17:48:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:52:12.811 17:48:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ec22201c9ba7ffd6ef7b5ad822a9c9b5 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:52:14.719 Fill FTL, iteration 2 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:14.719 17:48:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:52:14.719 [2024-11-26 17:48:52.008009] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
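Stitching the traced script lines together (upgrade_shutdown.sh@38-48), the driver is a small fill-then-hash loop. The script body itself is not reproduced in this log, so the following is a reconstruction consistent with the traced variable updates (seek and skip each advancing by count, sums[] indexed by i), not a verbatim copy:

    seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')   # $testdir assumed to resolve to test/ftl
    done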
00:52:14.719 [2024-11-26 17:48:52.009068] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84136 ] 00:52:14.979 [2024-11-26 17:48:52.207866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:14.979 [2024-11-26 17:48:52.361459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:16.889  [2024-11-26T17:48:55.275Z] Copying: 234/1024 [MB] (234 MBps) [2024-11-26T17:48:56.213Z] Copying: 469/1024 [MB] (235 MBps) [2024-11-26T17:48:57.156Z] Copying: 702/1024 [MB] (233 MBps) [2024-11-26T17:48:57.415Z] Copying: 928/1024 [MB] (226 MBps) [2024-11-26T17:48:58.795Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:52:21.349 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:52:21.349 Calculate MD5 checksum, iteration 2 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:21.349 17:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:21.349 [2024-11-26 17:48:58.729728] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
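Once both gigabytes have landed, the shutdown-upgrade machinery below is exercised: verbose_mode and prep_upgrade_on_shutdown are flipped via bdev_ftl_set_property, and the test asserts the write buffer actually holds data by counting cache chunks with non-zero utilization (the trace at upgrade_shutdown.sh@63 reports used=3: two CLOSED chunks at 1.0 plus one OPEN chunk at 0.001953125). The check is a single RPC piped through jq:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'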
00:52:21.349 [2024-11-26 17:48:58.730606] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84200 ] 00:52:21.609 [2024-11-26 17:48:58.907644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:21.609 [2024-11-26 17:48:59.031758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:23.525  [2024-11-26T17:49:01.911Z] Copying: 519/1024 [MB] (519 MBps) [2024-11-26T17:49:03.306Z] Copying: 1024/1024 [MB] (average 516 MBps) 00:52:25.860 00:52:25.860 17:49:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:52:25.860 17:49:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:27.768 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:52:27.768 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a69825ade26b33cd01f7a0e013f82819 00:52:27.768 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:52:27.768 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:52:27.768 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:52:28.028 [2024-11-26 17:49:05.213937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.028 [2024-11-26 17:49:05.214012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:28.028 [2024-11-26 17:49:05.214031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:52:28.028 [2024-11-26 17:49:05.214041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.028 [2024-11-26 17:49:05.214074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.028 [2024-11-26 17:49:05.214090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:28.028 [2024-11-26 17:49:05.214100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:28.028 [2024-11-26 17:49:05.214109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.028 [2024-11-26 17:49:05.214131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.028 [2024-11-26 17:49:05.214141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:28.028 [2024-11-26 17:49:05.214151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:28.028 [2024-11-26 17:49:05.214159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.028 [2024-11-26 17:49:05.214239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.294 ms, result 0 00:52:28.028 true 00:52:28.028 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:28.028 { 00:52:28.028 "name": "ftl", 00:52:28.028 "properties": [ 00:52:28.028 { 00:52:28.028 "name": "superblock_version", 00:52:28.028 "value": 5, 00:52:28.028 "read-only": true 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "name": "base_device", 00:52:28.028 "bands": [ 00:52:28.028 { 00:52:28.028 "id": 0, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 
00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 1, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 2, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 3, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 4, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 5, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 6, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 7, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 8, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 9, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 10, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 11, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 12, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 13, 00:52:28.028 "state": "FREE", 00:52:28.028 "validity": 0.0 00:52:28.028 }, 00:52:28.028 { 00:52:28.028 "id": 14, 00:52:28.029 "state": "FREE", 00:52:28.029 "validity": 0.0 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 15, 00:52:28.029 "state": "FREE", 00:52:28.029 "validity": 0.0 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 16, 00:52:28.029 "state": "FREE", 00:52:28.029 "validity": 0.0 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 17, 00:52:28.029 "state": "FREE", 00:52:28.029 "validity": 0.0 00:52:28.029 } 00:52:28.029 ], 00:52:28.029 "read-only": true 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "name": "cache_device", 00:52:28.029 "type": "bdev", 00:52:28.029 "chunks": [ 00:52:28.029 { 00:52:28.029 "id": 0, 00:52:28.029 "state": "INACTIVE", 00:52:28.029 "utilization": 0.0 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 1, 00:52:28.029 "state": "CLOSED", 00:52:28.029 "utilization": 1.0 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 2, 00:52:28.029 "state": "CLOSED", 00:52:28.029 "utilization": 1.0 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 3, 00:52:28.029 "state": "OPEN", 00:52:28.029 "utilization": 0.001953125 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "id": 4, 00:52:28.029 "state": "OPEN", 00:52:28.029 "utilization": 0.0 00:52:28.029 } 00:52:28.029 ], 00:52:28.029 "read-only": true 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "name": "verbose_mode", 00:52:28.029 "value": true, 00:52:28.029 "unit": "", 00:52:28.029 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:52:28.029 }, 00:52:28.029 { 00:52:28.029 "name": "prep_upgrade_on_shutdown", 00:52:28.029 "value": false, 00:52:28.029 "unit": "", 00:52:28.029 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:52:28.029 } 00:52:28.029 ] 00:52:28.029 } 00:52:28.029 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:52:28.289 [2024-11-26 17:49:05.637917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:52:28.289 [2024-11-26 17:49:05.637991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:28.289 [2024-11-26 17:49:05.638007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:52:28.289 [2024-11-26 17:49:05.638016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.289 [2024-11-26 17:49:05.638047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.289 [2024-11-26 17:49:05.638057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:28.289 [2024-11-26 17:49:05.638066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:28.289 [2024-11-26 17:49:05.638074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.289 [2024-11-26 17:49:05.638093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.289 [2024-11-26 17:49:05.638101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:28.289 [2024-11-26 17:49:05.638110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:52:28.289 [2024-11-26 17:49:05.638118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.289 [2024-11-26 17:49:05.638187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.273 ms, result 0 00:52:28.289 true 00:52:28.289 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:52:28.289 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:28.289 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:52:28.549 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:52:28.549 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:52:28.549 17:49:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:52:28.809 [2024-11-26 17:49:06.133911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.809 [2024-11-26 17:49:06.134047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:28.809 [2024-11-26 17:49:06.134089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:52:28.809 [2024-11-26 17:49:06.134114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.809 [2024-11-26 17:49:06.134200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.809 [2024-11-26 17:49:06.134261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:28.809 [2024-11-26 17:49:06.134288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:52:28.809 [2024-11-26 17:49:06.134326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.809 [2024-11-26 17:49:06.134363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.809 [2024-11-26 17:49:06.134389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:28.809 [2024-11-26 17:49:06.134433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:28.809 [2024-11-26 17:49:06.134457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:52:28.809 [2024-11-26 17:49:06.134564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.644 ms, result 0 00:52:28.809 true 00:52:28.809 17:49:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:29.070 { 00:52:29.070 "name": "ftl", 00:52:29.070 "properties": [ 00:52:29.070 { 00:52:29.070 "name": "superblock_version", 00:52:29.070 "value": 5, 00:52:29.070 "read-only": true 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "name": "base_device", 00:52:29.070 "bands": [ 00:52:29.070 { 00:52:29.070 "id": 0, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 1, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 2, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 3, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 4, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 5, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 6, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 7, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 8, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 9, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 10, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 11, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 12, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 13, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 14, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 15, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 16, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 17, 00:52:29.070 "state": "FREE", 00:52:29.070 "validity": 0.0 00:52:29.070 } 00:52:29.070 ], 00:52:29.070 "read-only": true 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "name": "cache_device", 00:52:29.070 "type": "bdev", 00:52:29.070 "chunks": [ 00:52:29.070 { 00:52:29.070 "id": 0, 00:52:29.070 "state": "INACTIVE", 00:52:29.070 "utilization": 0.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 1, 00:52:29.070 "state": "CLOSED", 00:52:29.070 "utilization": 1.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 2, 00:52:29.070 "state": "CLOSED", 00:52:29.070 "utilization": 1.0 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 3, 00:52:29.070 "state": "OPEN", 00:52:29.070 "utilization": 0.001953125 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "id": 4, 00:52:29.070 "state": "OPEN", 00:52:29.070 "utilization": 0.0 00:52:29.070 } 00:52:29.070 ], 00:52:29.070 "read-only": true 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "name": "verbose_mode", 
00:52:29.070 "value": true, 00:52:29.070 "unit": "", 00:52:29.070 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:52:29.070 }, 00:52:29.070 { 00:52:29.070 "name": "prep_upgrade_on_shutdown", 00:52:29.070 "value": true, 00:52:29.070 "unit": "", 00:52:29.070 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:52:29.070 } 00:52:29.070 ] 00:52:29.070 } 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83812 ]] 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83812 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83812 ']' 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83812 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83812 00:52:29.070 killing process with pid 83812 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83812' 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83812 00:52:29.070 17:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83812 00:52:30.446 [2024-11-26 17:49:07.701459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:52:30.446 [2024-11-26 17:49:07.723133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.446 [2024-11-26 17:49:07.723194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:52:30.446 [2024-11-26 17:49:07.723210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:52:30.446 [2024-11-26 17:49:07.723235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.446 [2024-11-26 17:49:07.723259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:52:30.446 [2024-11-26 17:49:07.727980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.446 [2024-11-26 17:49:07.728008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:52:30.446 [2024-11-26 17:49:07.728019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.715 ms 00:52:30.446 [2024-11-26 17:49:07.728032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.570 [2024-11-26 17:49:15.659178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.570 [2024-11-26 17:49:15.659426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:52:38.570 [2024-11-26 17:49:15.659454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7946.407 ms 00:52:38.570 [2024-11-26 17:49:15.659474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.570 [2024-11-26 17:49:15.660744] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:52:38.570 [2024-11-26 17:49:15.660784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:52:38.570 [2024-11-26 17:49:15.660798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.248 ms 00:52:38.570 [2024-11-26 17:49:15.660808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.570 [2024-11-26 17:49:15.662010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.570 [2024-11-26 17:49:15.662043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:52:38.570 [2024-11-26 17:49:15.662056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.167 ms 00:52:38.570 [2024-11-26 17:49:15.662066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.570 [2024-11-26 17:49:15.681518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.681790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:52:38.571 [2024-11-26 17:49:15.681815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.421 ms 00:52:38.571 [2024-11-26 17:49:15.681827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.693119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.693352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:52:38.571 [2024-11-26 17:49:15.693377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.222 ms 00:52:38.571 [2024-11-26 17:49:15.693388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.693601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.693635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:52:38.571 [2024-11-26 17:49:15.693661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:52:38.571 [2024-11-26 17:49:15.693671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.713623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.713725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:52:38.571 [2024-11-26 17:49:15.713745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.943 ms 00:52:38.571 [2024-11-26 17:49:15.713756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.733082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.733187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:52:38.571 [2024-11-26 17:49:15.733208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.277 ms 00:52:38.571 [2024-11-26 17:49:15.733217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.751847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.752085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:52:38.571 [2024-11-26 17:49:15.752108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.574 ms 00:52:38.571 [2024-11-26 17:49:15.752118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.772035] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.772141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:52:38.571 [2024-11-26 17:49:15.772158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.727 ms 00:52:38.571 [2024-11-26 17:49:15.772167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.772268] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:52:38.571 [2024-11-26 17:49:15.772329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:38.571 [2024-11-26 17:49:15.772342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:52:38.571 [2024-11-26 17:49:15.772353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:52:38.571 [2024-11-26 17:49:15.772363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:38.571 [2024-11-26 17:49:15.772513] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:52:38.571 [2024-11-26 17:49:15.772523] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b8c69390-3108-4515-9c8f-ad8b4fa0e6bd 00:52:38.571 [2024-11-26 17:49:15.772533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:52:38.571 [2024-11-26 17:49:15.772542] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:52:38.571 [2024-11-26 17:49:15.772552] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:52:38.571 [2024-11-26 17:49:15.772562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:52:38.571 [2024-11-26 17:49:15.772571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:52:38.571 [2024-11-26 17:49:15.772589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:52:38.571 [2024-11-26 17:49:15.772600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:52:38.571 [2024-11-26 17:49:15.772629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:52:38.571 [2024-11-26 17:49:15.772640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:52:38.571 [2024-11-26 17:49:15.772651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.772666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:52:38.571 [2024-11-26 17:49:15.772679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:52:38.571 [2024-11-26 17:49:15.772690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.798777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.798873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:52:38.571 [2024-11-26 17:49:15.798890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.025 ms 00:52:38.571 [2024-11-26 17:49:15.798916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.799675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:38.571 [2024-11-26 17:49:15.799699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:52:38.571 [2024-11-26 17:49:15.799712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.699 ms 00:52:38.571 [2024-11-26 17:49:15.799722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.885462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.571 [2024-11-26 17:49:15.885721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:38.571 [2024-11-26 17:49:15.885755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.571 [2024-11-26 17:49:15.885766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.885841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.571 [2024-11-26 17:49:15.885852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:38.571 [2024-11-26 17:49:15.885862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.571 [2024-11-26 17:49:15.885871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.886034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.571 [2024-11-26 17:49:15.886051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:38.571 [2024-11-26 17:49:15.886061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.571 [2024-11-26 17:49:15.886077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.571 [2024-11-26 17:49:15.886101] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.571 [2024-11-26 17:49:15.886128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:38.571 [2024-11-26 17:49:15.886138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.571 [2024-11-26 17:49:15.886148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.052178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.052275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:38.830 [2024-11-26 17:49:16.052291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.052319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.183197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.183433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:38.830 [2024-11-26 17:49:16.183454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.183466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.183641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.183655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:38.830 [2024-11-26 17:49:16.183666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.183678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.183766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.183779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:38.830 [2024-11-26 17:49:16.183789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.183798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.183951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.183966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:38.830 [2024-11-26 17:49:16.183977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.183986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.184037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.184057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:52:38.830 [2024-11-26 17:49:16.184067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.184077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.184130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.184142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:38.830 [2024-11-26 17:49:16.184152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.184162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 
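[editor note] The WAF figure in the statistics dump a few entries above is simply total writes divided by user writes; a quick arithmetic check of the reported values (assuming bc is available):

  # WAF = total writes / user writes
  echo 'scale=4; 786752 / 524288' | bc   # -> 1.5006
  # i.e. 262464 of the 786752 blocks written were FTL metadata/relocation traffic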
[2024-11-26 17:49:16.184222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:38.830 [2024-11-26 17:49:16.184235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:38.830 [2024-11-26 17:49:16.184244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:38.830 [2024-11-26 17:49:16.184253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:38.830 [2024-11-26 17:49:16.184412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8477.577 ms, result 0 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84430 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84430 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84430 ']' 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:45.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:45.399 17:49:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:45.399 [2024-11-26 17:49:22.686419] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
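[editor note] tcp_target_setup above launches spdk_tgt and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, using the rpc_addr and max_retries values visible in the trace; the probe RPC and sleep interval are assumptions for illustration, not the SPDK implementation:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          # give up early if the target died during startup
          kill -0 "$pid" 2>/dev/null || return 1
          # any cheap RPC serves as a liveness probe
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
              rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1   # retries exhausted
  }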
00:52:45.400 [2024-11-26 17:49:22.686595] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84430 ] 00:52:45.660 [2024-11-26 17:49:22.858497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:45.660 [2024-11-26 17:49:23.023924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:47.039 [2024-11-26 17:49:24.308750] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:47.039 [2024-11-26 17:49:24.308858] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:47.039 [2024-11-26 17:49:24.462283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.039 [2024-11-26 17:49:24.462375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:52:47.039 [2024-11-26 17:49:24.462393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:52:47.039 [2024-11-26 17:49:24.462404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.039 [2024-11-26 17:49:24.462511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.039 [2024-11-26 17:49:24.462527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:47.039 [2024-11-26 17:49:24.462537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:52:47.039 [2024-11-26 17:49:24.462547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.039 [2024-11-26 17:49:24.462577] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:52:47.039 [2024-11-26 17:49:24.463909] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:52:47.039 [2024-11-26 17:49:24.464028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.039 [2024-11-26 17:49:24.464047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:47.039 [2024-11-26 17:49:24.464059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.460 ms 00:52:47.039 [2024-11-26 17:49:24.464068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.039 [2024-11-26 17:49:24.466840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:52:47.299 [2024-11-26 17:49:24.494003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 17:49:24.494119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:52:47.299 [2024-11-26 17:49:24.494138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.208 ms 00:52:47.299 [2024-11-26 17:49:24.494148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.494311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 17:49:24.494325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:52:47.299 [2024-11-26 17:49:24.494336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:52:47.299 [2024-11-26 17:49:24.494345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.509617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 
17:49:24.509690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:47.299 [2024-11-26 17:49:24.509707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.132 ms 00:52:47.299 [2024-11-26 17:49:24.509717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.509853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 17:49:24.509875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:47.299 [2024-11-26 17:49:24.509886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:52:47.299 [2024-11-26 17:49:24.509896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.510011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 17:49:24.510029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:52:47.299 [2024-11-26 17:49:24.510040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:52:47.299 [2024-11-26 17:49:24.510049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.510084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:52:47.299 [2024-11-26 17:49:24.517334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 17:49:24.517511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:47.299 [2024-11-26 17:49:24.517545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.273 ms 00:52:47.299 [2024-11-26 17:49:24.517558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.517635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.299 [2024-11-26 17:49:24.517647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:52:47.299 [2024-11-26 17:49:24.517659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:52:47.299 [2024-11-26 17:49:24.517668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.299 [2024-11-26 17:49:24.517740] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:52:47.299 [2024-11-26 17:49:24.517774] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:52:47.300 [2024-11-26 17:49:24.517821] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:52:47.300 [2024-11-26 17:49:24.517840] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:52:47.300 [2024-11-26 17:49:24.517952] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:52:47.300 [2024-11-26 17:49:24.517964] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:52:47.300 [2024-11-26 17:49:24.517977] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:52:47.300 [2024-11-26 17:49:24.517990] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518005] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:52:47.300 [2024-11-26 17:49:24.518025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:52:47.300 [2024-11-26 17:49:24.518034] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:52:47.300 [2024-11-26 17:49:24.518043] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:52:47.300 [2024-11-26 17:49:24.518053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.300 [2024-11-26 17:49:24.518063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:52:47.300 [2024-11-26 17:49:24.518073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:52:47.300 [2024-11-26 17:49:24.518082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.300 [2024-11-26 17:49:24.518180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.300 [2024-11-26 17:49:24.518191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:52:47.300 [2024-11-26 17:49:24.518204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:52:47.300 [2024-11-26 17:49:24.518213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.300 [2024-11-26 17:49:24.518328] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:52:47.300 [2024-11-26 17:49:24.518341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:52:47.300 [2024-11-26 17:49:24.518351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:52:47.300 [2024-11-26 17:49:24.518383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:52:47.300 [2024-11-26 17:49:24.518400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:52:47.300 [2024-11-26 17:49:24.518410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:52:47.300 [2024-11-26 17:49:24.518418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:52:47.300 [2024-11-26 17:49:24.518435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:52:47.300 [2024-11-26 17:49:24.518443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:52:47.300 [2024-11-26 17:49:24.518460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:52:47.300 [2024-11-26 17:49:24.518468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:52:47.300 [2024-11-26 17:49:24.518485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:52:47.300 [2024-11-26 17:49:24.518493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518502] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:52:47.300 [2024-11-26 17:49:24.518510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:52:47.300 [2024-11-26 17:49:24.518518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:52:47.300 [2024-11-26 17:49:24.518559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:52:47.300 [2024-11-26 17:49:24.518568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:52:47.300 [2024-11-26 17:49:24.518585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:52:47.300 [2024-11-26 17:49:24.518594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:52:47.300 [2024-11-26 17:49:24.518625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:52:47.300 [2024-11-26 17:49:24.518640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:52:47.300 [2024-11-26 17:49:24.518657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:52:47.300 [2024-11-26 17:49:24.518666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:52:47.300 [2024-11-26 17:49:24.518682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:52:47.300 [2024-11-26 17:49:24.518714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:52:47.300 [2024-11-26 17:49:24.518738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:52:47.300 [2024-11-26 17:49:24.518746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518754] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:52:47.300 [2024-11-26 17:49:24.518764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:52:47.300 [2024-11-26 17:49:24.518775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:47.300 [2024-11-26 17:49:24.518807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:52:47.300 [2024-11-26 17:49:24.518820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:52:47.300 [2024-11-26 17:49:24.518833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:52:47.300 [2024-11-26 17:49:24.518842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:52:47.300 [2024-11-26 17:49:24.518851] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:52:47.300 [2024-11-26 17:49:24.518865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:52:47.300 [2024-11-26 17:49:24.518880] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:52:47.300 [2024-11-26 17:49:24.518898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.518914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:52:47.300 [2024-11-26 17:49:24.518925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.518938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.518953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:52:47.300 [2024-11-26 17:49:24.518965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:52:47.300 [2024-11-26 17:49:24.518976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:52:47.300 [2024-11-26 17:49:24.518990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:52:47.300 [2024-11-26 17:49:24.519002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:52:47.300 [2024-11-26 17:49:24.519093] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:52:47.300 [2024-11-26 17:49:24.519107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:47.300 [2024-11-26 17:49:24.519134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:52:47.300 [2024-11-26 17:49:24.519146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:52:47.300 [2024-11-26 17:49:24.519159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:52:47.300 [2024-11-26 17:49:24.519174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.300 [2024-11-26 17:49:24.519187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:52:47.301 [2024-11-26 17:49:24.519203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:52:47.301 [2024-11-26 17:49:24.519214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.301 [2024-11-26 17:49:24.519303] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:52:47.301 [2024-11-26 17:49:24.519330] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:52:49.833 [2024-11-26 17:49:27.055615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.055827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:52:49.833 [2024-11-26 17:49:27.055868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2541.201 ms 00:52:49.833 [2024-11-26 17:49:27.055892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.108085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.108267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:49.833 [2024-11-26 17:49:27.108307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.797 ms 00:52:49.833 [2024-11-26 17:49:27.108331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.108532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.108573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:52:49.833 [2024-11-26 17:49:27.108615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:52:49.833 [2024-11-26 17:49:27.108645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.165948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.166163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:49.833 [2024-11-26 17:49:27.166208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.335 ms 00:52:49.833 [2024-11-26 17:49:27.166231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.166349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.166389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:49.833 [2024-11-26 17:49:27.166437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:52:49.833 [2024-11-26 17:49:27.166467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.167431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.167494] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:49.833 [2024-11-26 17:49:27.167525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 00:52:49.833 [2024-11-26 17:49:27.167559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.167638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.167673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:49.833 [2024-11-26 17:49:27.167702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:52:49.833 [2024-11-26 17:49:27.167731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.195127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.195304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:49.833 [2024-11-26 17:49:27.195341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.397 ms 00:52:49.833 [2024-11-26 17:49:27.195363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.238156] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:52:49.833 [2024-11-26 17:49:27.238381] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:52:49.833 [2024-11-26 17:49:27.238437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.238464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:52:49.833 [2024-11-26 17:49:27.238492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.935 ms 00:52:49.833 [2024-11-26 17:49:27.238515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:49.833 [2024-11-26 17:49:27.265052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:49.833 [2024-11-26 17:49:27.265261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:52:49.833 [2024-11-26 17:49:27.265282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.431 ms 00:52:49.833 [2024-11-26 17:49:27.265292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.290056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.290152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:52:50.093 [2024-11-26 17:49:27.290171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.687 ms 00:52:50.093 [2024-11-26 17:49:27.290180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.314090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.314181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:52:50.093 [2024-11-26 17:49:27.314199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.839 ms 00:52:50.093 [2024-11-26 17:49:27.314209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.315264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.315396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:52:50.093 [2024-11-26 
17:49:27.315413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.815 ms 00:52:50.093 [2024-11-26 17:49:27.315424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.425962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.426066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:52:50.093 [2024-11-26 17:49:27.426084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 110.709 ms 00:52:50.093 [2024-11-26 17:49:27.426095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.444906] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:52:50.093 [2024-11-26 17:49:27.447139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.447184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:52:50.093 [2024-11-26 17:49:27.447200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.963 ms 00:52:50.093 [2024-11-26 17:49:27.447211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.447396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.447412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:52:50.093 [2024-11-26 17:49:27.447439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:52:50.093 [2024-11-26 17:49:27.447449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.447521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.447533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:52:50.093 [2024-11-26 17:49:27.447543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:52:50.093 [2024-11-26 17:49:27.447552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.447593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.447603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:52:50.093 [2024-11-26 17:49:27.447615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:52:50.093 [2024-11-26 17:49:27.447624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.447678] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:52:50.093 [2024-11-26 17:49:27.447690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.447698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:52:50.093 [2024-11-26 17:49:27.447707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:52:50.093 [2024-11-26 17:49:27.447716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.495631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.495740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:52:50.093 [2024-11-26 17:49:27.495759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.968 ms 00:52:50.093 [2024-11-26 17:49:27.495769] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.495924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.093 [2024-11-26 17:49:27.495937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:52:50.093 [2024-11-26 17:49:27.495947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:52:50.093 [2024-11-26 17:49:27.495956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.093 [2024-11-26 17:49:27.497812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3040.741 ms, result 0 00:52:50.093 [2024-11-26 17:49:27.512154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:50.093 [2024-11-26 17:49:27.528178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:52:50.352 [2024-11-26 17:49:27.539516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:50.352 17:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:50.352 17:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:52:50.352 17:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:50.352 17:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:52:50.352 17:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:52:50.612 [2024-11-26 17:49:27.830989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.612 [2024-11-26 17:49:27.831073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:50.612 [2024-11-26 17:49:27.831097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:52:50.612 [2024-11-26 17:49:27.831107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.612 [2024-11-26 17:49:27.831142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.612 [2024-11-26 17:49:27.831154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:50.612 [2024-11-26 17:49:27.831164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:50.612 [2024-11-26 17:49:27.831173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.612 [2024-11-26 17:49:27.831195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:50.612 [2024-11-26 17:49:27.831205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:50.612 [2024-11-26 17:49:27.831214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:52:50.612 [2024-11-26 17:49:27.831223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:50.612 [2024-11-26 17:49:27.831300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.319 ms, result 0 00:52:50.612 true 00:52:50.612 17:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:50.871 { 00:52:50.871 "name": "ftl", 00:52:50.871 "properties": [ 00:52:50.871 { 00:52:50.871 "name": "superblock_version", 00:52:50.871 "value": 5, 00:52:50.871 "read-only": true 00:52:50.871 }, 
00:52:50.871 { 00:52:50.871 "name": "base_device", 00:52:50.871 "bands": [ 00:52:50.871 { 00:52:50.871 "id": 0, 00:52:50.871 "state": "CLOSED", 00:52:50.871 "validity": 1.0 00:52:50.871 }, 00:52:50.871 { 00:52:50.871 "id": 1, 00:52:50.871 "state": "CLOSED", 00:52:50.871 "validity": 1.0 00:52:50.871 }, 00:52:50.871 { 00:52:50.871 "id": 2, 00:52:50.871 "state": "CLOSED", 00:52:50.871 "validity": 0.007843137254901933 00:52:50.871 }, 00:52:50.871 { 00:52:50.871 "id": 3, 00:52:50.871 "state": "FREE", 00:52:50.871 "validity": 0.0 00:52:50.871 }, 00:52:50.871 { 00:52:50.871 "id": 4, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 5, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 6, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 7, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 8, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 9, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 10, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 11, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 12, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 13, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 14, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 15, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 16, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 17, 00:52:50.872 "state": "FREE", 00:52:50.872 "validity": 0.0 00:52:50.872 } 00:52:50.872 ], 00:52:50.872 "read-only": true 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "name": "cache_device", 00:52:50.872 "type": "bdev", 00:52:50.872 "chunks": [ 00:52:50.872 { 00:52:50.872 "id": 0, 00:52:50.872 "state": "INACTIVE", 00:52:50.872 "utilization": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 1, 00:52:50.872 "state": "OPEN", 00:52:50.872 "utilization": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 2, 00:52:50.872 "state": "OPEN", 00:52:50.872 "utilization": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 3, 00:52:50.872 "state": "FREE", 00:52:50.872 "utilization": 0.0 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "id": 4, 00:52:50.872 "state": "FREE", 00:52:50.872 "utilization": 0.0 00:52:50.872 } 00:52:50.872 ], 00:52:50.872 "read-only": true 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "name": "verbose_mode", 00:52:50.872 "value": true, 00:52:50.872 "unit": "", 00:52:50.872 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:52:50.872 }, 00:52:50.872 { 00:52:50.872 "name": "prep_upgrade_on_shutdown", 00:52:50.872 "value": false, 00:52:50.872 "unit": "", 00:52:50.872 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:52:50.872 } 00:52:50.872 ] 00:52:50.872 } 00:52:50.872 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:52:50.872 17:49:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:50.872 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:52:51.132 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:52:51.132 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:52:51.132 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:52:51.132 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:51.132 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:52:51.391 Validate MD5 checksum, iteration 1 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:51.391 17:49:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:51.391 [2024-11-26 17:49:28.738887] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
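[editor note] The two jq probes above can be reproduced standalone against the same RPC; the filters are copied verbatim from the trace. Note that in the properties JSON earlier in this log the bands live under the "base_device" property, so select(.name == "bands") appears to match nothing and the OPENED count is trivially 0 here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  used=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[]
      | select(.name == "cache_device") | .chunks[]
      | select(.utilization != 0.0)] | length')
  opened=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[]
      | select(.name == "bands") | .bands[]
      | select(.state == "OPENED")] | length')
  echo "used=$used opened=$opened"   # both 0 at this point in the test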
00:52:51.391 [2024-11-26 17:49:28.739146] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84508 ] 00:52:51.650 [2024-11-26 17:49:28.905846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:51.650 [2024-11-26 17:49:29.093278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:53.572  [2024-11-26T17:49:31.955Z] Copying: 521/1024 [MB] (521 MBps) [2024-11-26T17:49:31.955Z] Copying: 1006/1024 [MB] (485 MBps) [2024-11-26T17:49:33.861Z] Copying: 1024/1024 [MB] (average 504 MBps) 00:52:56.415 00:52:56.415 17:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:52:56.416 17:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ec22201c9ba7ffd6ef7b5ad822a9c9b5 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ec22201c9ba7ffd6ef7b5ad822a9c9b5 != \e\c\2\2\2\0\1\c\9\b\a\7\f\f\d\6\e\f\7\b\5\a\d\8\2\2\a\9\c\9\b\5 ]] 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:52:58.345 Validate MD5 checksum, iteration 2 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:58.345 17:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:58.345 [2024-11-26 17:49:35.781006] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
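[editor note] While iteration 2's spdk_dd starts up below, this is the shape of the loop driving both checksum iterations, condensed from the xtrace: read 1024 one-MiB blocks of ftln1 over NVMe/TCP at queue depth 2, hash the output file, compare. tcp_dd stands in for the helper wrapping the spdk_dd invocation traced above, and expected_sums is a stand-in name for the sums recorded when the data was written:

  test_validate_checksum_sketch() {
      local iterations=$1 skip=0 i sum
      for ((i = 0; i < iterations; i++)); do
          echo "Validate MD5 checksum, iteration $((i + 1))"
          tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
              --qd=2 --skip=$skip
          skip=$((skip + 1024))
          sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
          # mismatch against the recorded sum fails the test
          [[ $sum == "${expected_sums[i]}" ]] || return 1
      done
  }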
00:52:58.345 [2024-11-26 17:49:35.781359] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84585 ] 00:52:58.605 [2024-11-26 17:49:35.973308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:58.863 [2024-11-26 17:49:36.132734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:00.772  [2024-11-26T17:49:38.787Z] Copying: 538/1024 [MB] (538 MBps) [2024-11-26T17:49:40.689Z] Copying: 1024/1024 [MB] (average 554 MBps) 00:53:03.243 00:53:03.243 17:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:53:03.243 17:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a69825ade26b33cd01f7a0e013f82819 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a69825ade26b33cd01f7a0e013f82819 != \a\6\9\8\2\5\a\d\e\2\6\b\3\3\c\d\0\1\f\7\a\0\e\0\1\3\f\8\2\8\1\9 ]] 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84430 ]] 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84430 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:53:05.150 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84658 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84658 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84658 ']' 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:05.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
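[Editor's note] This kill -9 is the point of the whole test: the target dies without writing FTL's clean-shutdown state, and the freshly launched spdk_tgt must bring the ftl bdev back up through dirty recovery. In outline (function names follow the traced ftl/common.sh helpers; backgrounding with & and capturing $! are assumptions about their bodies):

    tcp_target_shutdown_dirty() {
        # SIGKILL: no handler runs, so no clean-shutdown metadata is persisted
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # the saved tgt.json recreates the same bdev stack, so FTL comes up from
        # the dirty superblock and runs the recovery path traced below
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid
    }

The ~1.6 s 'FTL startup' that follows is that recovery: the superblock loads dirty (SHM: clean 0), P2L checkpoints are restored and preprocessed, band state is replayed, and the two chunks open at kill time (seq ids 14 and 15) are read back, persisted, and closed before the target starts listening again.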
00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:05.151 17:49:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:05.151 [2024-11-26 17:49:42.544857] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:53:05.151 [2024-11-26 17:49:42.545140] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84658 ] 00:53:05.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84430 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:53:05.410 [2024-11-26 17:49:42.732177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:05.669 [2024-11-26 17:49:42.888398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:07.059 [2024-11-26 17:49:44.130318] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:07.060 [2024-11-26 17:49:44.130553] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:07.060 [2024-11-26 17:49:44.278784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.278865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:53:07.060 [2024-11-26 17:49:44.278882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:07.060 [2024-11-26 17:49:44.278891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.278982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.279006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:53:07.060 [2024-11-26 17:49:44.279015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:53:07.060 [2024-11-26 17:49:44.279022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.279047] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:53:07.060 [2024-11-26 17:49:44.280119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:53:07.060 [2024-11-26 17:49:44.280143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.280152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:53:07.060 [2024-11-26 17:49:44.280163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.103 ms 00:53:07.060 [2024-11-26 17:49:44.280172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.280535] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:53:07.060 [2024-11-26 17:49:44.308467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.308541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:53:07.060 [2024-11-26 17:49:44.308558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.983 ms 00:53:07.060 [2024-11-26 17:49:44.308568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.324569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:53:07.060 [2024-11-26 17:49:44.324654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:53:07.060 [2024-11-26 17:49:44.324670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:53:07.060 [2024-11-26 17:49:44.324678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.325107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.325122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:53:07.060 [2024-11-26 17:49:44.325131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:53:07.060 [2024-11-26 17:49:44.325140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.325212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.325225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:53:07.060 [2024-11-26 17:49:44.325235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:53:07.060 [2024-11-26 17:49:44.325243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.325282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.325291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:53:07.060 [2024-11-26 17:49:44.325299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:53:07.060 [2024-11-26 17:49:44.325307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.325338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:53:07.060 [2024-11-26 17:49:44.330874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.330938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:53:07.060 [2024-11-26 17:49:44.330951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.556 ms 00:53:07.060 [2024-11-26 17:49:44.330964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.331007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.331016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:53:07.060 [2024-11-26 17:49:44.331026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:53:07.060 [2024-11-26 17:49:44.331034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.331089] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:53:07.060 [2024-11-26 17:49:44.331116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:53:07.060 [2024-11-26 17:49:44.331152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:53:07.060 [2024-11-26 17:49:44.331171] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:53:07.060 [2024-11-26 17:49:44.331277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:53:07.060 [2024-11-26 17:49:44.331289] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:53:07.060 [2024-11-26 17:49:44.331300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:53:07.060 [2024-11-26 17:49:44.331310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331328] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:53:07.060 [2024-11-26 17:49:44.331337] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:53:07.060 [2024-11-26 17:49:44.331344] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:53:07.060 [2024-11-26 17:49:44.331352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:53:07.060 [2024-11-26 17:49:44.331364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.331373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:53:07.060 [2024-11-26 17:49:44.331381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:53:07.060 [2024-11-26 17:49:44.331390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.331466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.060 [2024-11-26 17:49:44.331475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:53:07.060 [2024-11-26 17:49:44.331483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:53:07.060 [2024-11-26 17:49:44.331491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.060 [2024-11-26 17:49:44.331590] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:53:07.060 [2024-11-26 17:49:44.331604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:53:07.060 [2024-11-26 17:49:44.331628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:53:07.060 [2024-11-26 17:49:44.331652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:53:07.060 [2024-11-26 17:49:44.331666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:53:07.060 [2024-11-26 17:49:44.331674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:53:07.060 [2024-11-26 17:49:44.331682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:53:07.060 [2024-11-26 17:49:44.331698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:53:07.060 [2024-11-26 17:49:44.331704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:53:07.060 [2024-11-26 17:49:44.331718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:53:07.060 [2024-11-26 17:49:44.331726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:53:07.060 [2024-11-26 17:49:44.331739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:53:07.060 [2024-11-26 17:49:44.331746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:53:07.060 [2024-11-26 17:49:44.331760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:53:07.060 [2024-11-26 17:49:44.331783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:53:07.060 [2024-11-26 17:49:44.331799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:53:07.060 [2024-11-26 17:49:44.331806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:53:07.060 [2024-11-26 17:49:44.331821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:53:07.060 [2024-11-26 17:49:44.331828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:53:07.060 [2024-11-26 17:49:44.331842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:53:07.060 [2024-11-26 17:49:44.331848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:53:07.060 [2024-11-26 17:49:44.331863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:53:07.060 [2024-11-26 17:49:44.331870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:53:07.060 [2024-11-26 17:49:44.331884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:53:07.060 [2024-11-26 17:49:44.331890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:53:07.060 [2024-11-26 17:49:44.331904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:53:07.060 [2024-11-26 17:49:44.331911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.061 [2024-11-26 17:49:44.331918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:53:07.061 [2024-11-26 17:49:44.331925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:53:07.061 [2024-11-26 17:49:44.331932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:07.061 [2024-11-26 17:49:44.331939] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:53:07.061 [2024-11-26 17:49:44.331948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:53:07.061 [2024-11-26 17:49:44.331956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:07.061 [2024-11-26 17:49:44.331965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:53:07.061 [2024-11-26 17:49:44.331982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:53:07.061 [2024-11-26 17:49:44.331989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:53:07.061 [2024-11-26 17:49:44.331997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:53:07.061 [2024-11-26 17:49:44.332005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:53:07.061 [2024-11-26 17:49:44.332012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:53:07.061 [2024-11-26 17:49:44.332019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:53:07.061 [2024-11-26 17:49:44.332028] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:53:07.061 [2024-11-26 17:49:44.332038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:53:07.061 [2024-11-26 17:49:44.332054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:53:07.061 [2024-11-26 17:49:44.332082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:53:07.061 [2024-11-26 17:49:44.332090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:53:07.061 [2024-11-26 17:49:44.332099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:53:07.061 [2024-11-26 17:49:44.332106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:53:07.061 [2024-11-26 17:49:44.332160] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:53:07.061 [2024-11-26 17:49:44.332168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:53:07.061 [2024-11-26 17:49:44.332188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:53:07.061 [2024-11-26 17:49:44.332195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:53:07.061 [2024-11-26 17:49:44.332203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:53:07.061 [2024-11-26 17:49:44.332213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.332221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:53:07.061 [2024-11-26 17:49:44.332230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.680 ms 00:53:07.061 [2024-11-26 17:49:44.332237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.378325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.378512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:07.061 [2024-11-26 17:49:44.378533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.107 ms 00:53:07.061 [2024-11-26 17:49:44.378543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.378638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.378650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:53:07.061 [2024-11-26 17:49:44.378660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:53:07.061 [2024-11-26 17:49:44.378668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.431573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.431795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:07.061 [2024-11-26 17:49:44.431817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.879 ms 00:53:07.061 [2024-11-26 17:49:44.431826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.431919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.431929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:07.061 [2024-11-26 17:49:44.431944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:07.061 [2024-11-26 17:49:44.431952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.432099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.432112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:53:07.061 [2024-11-26 17:49:44.432122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:53:07.061 [2024-11-26 17:49:44.432130] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.432176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.432186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:07.061 [2024-11-26 17:49:44.432195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:53:07.061 [2024-11-26 17:49:44.432208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.458706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.458882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:07.061 [2024-11-26 17:49:44.458909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.524 ms 00:53:07.061 [2024-11-26 17:49:44.458918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.459128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.459145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:53:07.061 [2024-11-26 17:49:44.459154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:53:07.061 [2024-11-26 17:49:44.459163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.061 [2024-11-26 17:49:44.501720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.061 [2024-11-26 17:49:44.501808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:53:07.061 [2024-11-26 17:49:44.501826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.611 ms 00:53:07.061 [2024-11-26 17:49:44.501836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.519898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.520107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:53:07.321 [2024-11-26 17:49:44.520142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.686 ms 00:53:07.321 [2024-11-26 17:49:44.520151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.625196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.625285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:53:07.321 [2024-11-26 17:49:44.625303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.113 ms 00:53:07.321 [2024-11-26 17:49:44.625313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.625604] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:53:07.321 [2024-11-26 17:49:44.625811] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:53:07.321 [2024-11-26 17:49:44.625992] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:53:07.321 [2024-11-26 17:49:44.626170] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:53:07.321 [2024-11-26 17:49:44.626187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.626197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:53:07.321 
[2024-11-26 17:49:44.626207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.773 ms 00:53:07.321 [2024-11-26 17:49:44.626216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.626353] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:53:07.321 [2024-11-26 17:49:44.626371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.626380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:53:07.321 [2024-11-26 17:49:44.626389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:53:07.321 [2024-11-26 17:49:44.626397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.652594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.652670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:53:07.321 [2024-11-26 17:49:44.652686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.222 ms 00:53:07.321 [2024-11-26 17:49:44.652695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.668080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.668138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:53:07.321 [2024-11-26 17:49:44.668151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:53:07.321 [2024-11-26 17:49:44.668159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.321 [2024-11-26 17:49:44.668295] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:53:07.321 [2024-11-26 17:49:44.668655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.321 [2024-11-26 17:49:44.668683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:53:07.321 [2024-11-26 17:49:44.668694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:53:07.321 [2024-11-26 17:49:44.668702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.892 [2024-11-26 17:49:45.257159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.892 [2024-11-26 17:49:45.257248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:53:07.892 [2024-11-26 17:49:45.257284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 587.982 ms 00:53:07.892 [2024-11-26 17:49:45.257294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.892 [2024-11-26 17:49:45.263605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.892 [2024-11-26 17:49:45.263661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:53:07.892 [2024-11-26 17:49:45.263673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.359 ms 00:53:07.892 [2024-11-26 17:49:45.263689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.892 [2024-11-26 17:49:45.264144] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:53:07.892 [2024-11-26 17:49:45.264172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.892 [2024-11-26 17:49:45.264182] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:53:07.892 [2024-11-26 17:49:45.264193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:53:07.892 [2024-11-26 17:49:45.264202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.892 [2024-11-26 17:49:45.264235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.892 [2024-11-26 17:49:45.264246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:53:07.892 [2024-11-26 17:49:45.264262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:07.892 [2024-11-26 17:49:45.264271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:07.892 [2024-11-26 17:49:45.264310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 597.163 ms, result 0 00:53:07.892 [2024-11-26 17:49:45.264358] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:53:07.892 [2024-11-26 17:49:45.264476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:07.892 [2024-11-26 17:49:45.264493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:53:07.892 [2024-11-26 17:49:45.264502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.120 ms 00:53:07.892 [2024-11-26 17:49:45.264510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.477 [2024-11-26 17:49:45.823017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.477 [2024-11-26 17:49:45.823124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:53:08.477 [2024-11-26 17:49:45.823178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 558.161 ms 00:53:08.477 [2024-11-26 17:49:45.823188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.477 [2024-11-26 17:49:45.830118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.477 [2024-11-26 17:49:45.830303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:53:08.477 [2024-11-26 17:49:45.830323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.183 ms 00:53:08.477 [2024-11-26 17:49:45.830333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.477 [2024-11-26 17:49:45.830816] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:53:08.477 [2024-11-26 17:49:45.830847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.477 [2024-11-26 17:49:45.830857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:53:08.478 [2024-11-26 17:49:45.830869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:53:08.478 [2024-11-26 17:49:45.830878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.478 [2024-11-26 17:49:45.830915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.478 [2024-11-26 17:49:45.830927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:53:08.478 [2024-11-26 17:49:45.830937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:08.478 [2024-11-26 17:49:45.830946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.478 
[2024-11-26 17:49:45.830995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 567.722 ms, result 0 00:53:08.478 [2024-11-26 17:49:45.831050] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:53:08.478 [2024-11-26 17:49:45.831063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:53:08.478 [2024-11-26 17:49:45.831075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.478 [2024-11-26 17:49:45.831086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:53:08.478 [2024-11-26 17:49:45.831096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1165.052 ms 00:53:08.478 [2024-11-26 17:49:45.831106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.478 [2024-11-26 17:49:45.831151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.478 [2024-11-26 17:49:45.831163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:53:08.478 [2024-11-26 17:49:45.831173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:08.478 [2024-11-26 17:49:45.831183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.478 [2024-11-26 17:49:45.849303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:53:08.478 [2024-11-26 17:49:45.849709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.478 [2024-11-26 17:49:45.849730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:53:08.479 [2024-11-26 17:49:45.849746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.539 ms 00:53:08.479 [2024-11-26 17:49:45.849754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.479 [2024-11-26 17:49:45.850555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.479 [2024-11-26 17:49:45.850595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:53:08.479 [2024-11-26 17:49:45.850625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:53:08.479 [2024-11-26 17:49:45.850635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.479 [2024-11-26 17:49:45.852984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.479 [2024-11-26 17:49:45.853013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:53:08.479 [2024-11-26 17:49:45.853024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.319 ms 00:53:08.479 [2024-11-26 17:49:45.853033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.479 [2024-11-26 17:49:45.853126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.479 [2024-11-26 17:49:45.853144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:53:08.479 [2024-11-26 17:49:45.853167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:53:08.479 [2024-11-26 17:49:45.853176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.479 [2024-11-26 17:49:45.853325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.479 [2024-11-26 17:49:45.853338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:53:08.479 [2024-11-26 17:49:45.853348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:53:08.479 [2024-11-26 17:49:45.853358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.479 [2024-11-26 17:49:45.853389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.479 [2024-11-26 17:49:45.853400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:53:08.479 [2024-11-26 17:49:45.853409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:53:08.479 [2024-11-26 17:49:45.853423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.479 [2024-11-26 17:49:45.853461] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:53:08.479 [2024-11-26 17:49:45.853473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.479 [2024-11-26 17:49:45.853483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:53:08.479 [2024-11-26 17:49:45.853493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:53:08.480 [2024-11-26 17:49:45.853502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.480 [2024-11-26 17:49:45.853571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:08.480 [2024-11-26 17:49:45.853583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:53:08.480 [2024-11-26 17:49:45.853638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:53:08.480 [2024-11-26 17:49:45.853654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:08.480 [2024-11-26 17:49:45.855188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1578.807 ms, result 0 00:53:08.480 [2024-11-26 17:49:45.870284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:08.480 [2024-11-26 17:49:45.886346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:53:08.480 [2024-11-26 17:49:45.898258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:53:08.746 Validate MD5 checksum, iteration 1 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:08.746 17:49:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:08.746 17:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:08.746 [2024-11-26 17:49:46.052150] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:53:08.746 [2024-11-26 17:49:46.052434] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84698 ] 00:53:09.007 [2024-11-26 17:49:46.236879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:09.007 [2024-11-26 17:49:46.367288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:10.914  [2024-11-26T17:49:48.928Z] Copying: 552/1024 [MB] (552 MBps) [2024-11-26T17:49:53.122Z] Copying: 1024/1024 [MB] (average 544 MBps) 00:53:15.676 00:53:15.676 17:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:53:15.676 17:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ec22201c9ba7ffd6ef7b5ad822a9c9b5 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ec22201c9ba7ffd6ef7b5ad822a9c9b5 != \e\c\2\2\2\0\1\c\9\b\a\7\f\f\d\6\e\f\7\b\5\a\d\8\2\2\a\9\c\9\b\5 ]] 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:17.650 Validate MD5 checksum, iteration 2 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:17.650 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:17.651 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:17.651 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:17.651 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:17.651 17:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:17.651 [2024-11-26 17:49:54.818092] Starting SPDK v25.01-pre git sha1 
f7ce15267 / DPDK 24.03.0 initialization... 00:53:17.651 [2024-11-26 17:49:54.818326] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84787 ] 00:53:17.651 [2024-11-26 17:49:54.992560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:17.909 [2024-11-26 17:49:55.133526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:19.816  [2024-11-26T17:49:57.830Z] Copying: 598/1024 [MB] (598 MBps) [2024-11-26T17:50:00.372Z] Copying: 1024/1024 [MB] (average 592 MBps) 00:53:22.926 00:53:22.926 17:50:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:53:22.926 17:50:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a69825ade26b33cd01f7a0e013f82819 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a69825ade26b33cd01f7a0e013f82819 != \a\6\9\8\2\5\a\d\e\2\6\b\3\3\c\d\0\1\f\7\a\0\e\0\1\3\f\8\2\8\1\9 ]] 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84658 ]] 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84658 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84658 ']' 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84658 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84658 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84658' 00:53:24.853 killing process with pid 84658 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84658 00:53:24.853 17:50:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84658 00:53:26.236 [2024-11-26 17:50:03.443713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:53:26.236 [2024-11-26 17:50:03.463055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.463175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:53:26.236 [2024-11-26 17:50:03.463210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:26.236 [2024-11-26 17:50:03.463232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.463270] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:53:26.236 [2024-11-26 17:50:03.467722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.467801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:53:26.236 [2024-11-26 17:50:03.467830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.422 ms 00:53:26.236 [2024-11-26 17:50:03.467851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.468115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.468158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:53:26.236 [2024-11-26 17:50:03.468190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:53:26.236 [2024-11-26 17:50:03.468222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.469569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.469657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:53:26.236 [2024-11-26 17:50:03.469721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.315 ms 00:53:26.236 [2024-11-26 17:50:03.469746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.470870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.470928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:53:26.236 [2024-11-26 17:50:03.470965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:53:26.236 [2024-11-26 17:50:03.470989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.487436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.487522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:53:26.236 [2024-11-26 17:50:03.487559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.406 ms 00:53:26.236 [2024-11-26 17:50:03.487581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.496391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.496482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:53:26.236 [2024-11-26 17:50:03.496513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.753 ms 00:53:26.236 [2024-11-26 17:50:03.496534] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.496658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.496715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:53:26.236 [2024-11-26 17:50:03.496751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:53:26.236 [2024-11-26 17:50:03.496783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.512635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.512709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:53:26.236 [2024-11-26 17:50:03.512738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.850 ms 00:53:26.236 [2024-11-26 17:50:03.512759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.528003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.528076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:53:26.236 [2024-11-26 17:50:03.528104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.224 ms 00:53:26.236 [2024-11-26 17:50:03.528124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.544830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.544913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:53:26.236 [2024-11-26 17:50:03.544944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.690 ms 00:53:26.236 [2024-11-26 17:50:03.544967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.561761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.236 [2024-11-26 17:50:03.561865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:53:26.236 [2024-11-26 17:50:03.561898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.711 ms 00:53:26.236 [2024-11-26 17:50:03.561922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.236 [2024-11-26 17:50:03.561979] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:53:26.236 [2024-11-26 17:50:03.562015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:53:26.237 [2024-11-26 17:50:03.562069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:53:26.237 [2024-11-26 17:50:03.562080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:53:26.237 [2024-11-26 17:50:03.562090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 
[2024-11-26 17:50:03.562136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:53:26.237 [2024-11-26 17:50:03.562228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:53:26.237 [2024-11-26 17:50:03.562238] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b8c69390-3108-4515-9c8f-ad8b4fa0e6bd 00:53:26.237 [2024-11-26 17:50:03.562247] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:53:26.237 [2024-11-26 17:50:03.562257] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:53:26.237 [2024-11-26 17:50:03.562266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:53:26.237 [2024-11-26 17:50:03.562275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:53:26.237 [2024-11-26 17:50:03.562290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:53:26.237 [2024-11-26 17:50:03.562306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:53:26.237 [2024-11-26 17:50:03.562315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:53:26.237 [2024-11-26 17:50:03.562324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:53:26.237 [2024-11-26 17:50:03.562332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:53:26.237 [2024-11-26 17:50:03.562341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.237 [2024-11-26 17:50:03.562351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:53:26.237 [2024-11-26 17:50:03.562361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:53:26.237 [2024-11-26 17:50:03.562369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.237 [2024-11-26 17:50:03.584467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.237 [2024-11-26 17:50:03.584514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:53:26.237 [2024-11-26 17:50:03.584526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.104 ms 00:53:26.237 [2024-11-26 17:50:03.584542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
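[Editor's note] As a cross-check, the band dump above is consistent with what the test wrote and read back. Assuming FTL's usual 4 KiB block size (an assumption; the block size is not printed in this log), the closed bands account exactly for the verified data:

    261120 + 261120 + 2048      = 524288 valid blocks   (Bands 1-3, matching 'total valid LBAs: 524288')
    524288 blocks x 4 KiB       = 2048 MiB
    2 iterations x 1024 x 1 MiB = 2048 MiB read back by the checksum loops above
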
00:53:26.237 [2024-11-26 17:50:03.585161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:26.237 [2024-11-26 17:50:03.585182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:53:26.237 [2024-11-26 17:50:03.585192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms
00:53:26.237 [2024-11-26 17:50:03.585200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.237 [2024-11-26 17:50:03.656110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.237 [2024-11-26 17:50:03.656178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:53:26.237 [2024-11-26 17:50:03.656192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.237 [2024-11-26 17:50:03.656208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.237 [2024-11-26 17:50:03.656265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.237 [2024-11-26 17:50:03.656276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:53:26.237 [2024-11-26 17:50:03.656285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.237 [2024-11-26 17:50:03.656293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.237 [2024-11-26 17:50:03.656415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.237 [2024-11-26 17:50:03.656429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:53:26.237 [2024-11-26 17:50:03.656438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.237 [2024-11-26 17:50:03.656447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.237 [2024-11-26 17:50:03.656472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.237 [2024-11-26 17:50:03.656491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:53:26.237 [2024-11-26 17:50:03.656499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.237 [2024-11-26 17:50:03.656506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.795499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.795588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:53:26.497 [2024-11-26 17:50:03.795604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.795632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.904381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.904448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:53:26.497 [2024-11-26 17:50:03.904461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.904469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.904585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.904598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:53:26.497 [2024-11-26 17:50:03.904622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.904631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.904684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.904704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:53:26.497 [2024-11-26 17:50:03.904713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.904724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.904836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.904849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:53:26.497 [2024-11-26 17:50:03.904858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.904866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.904901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.904916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:53:26.497 [2024-11-26 17:50:03.904925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.904932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.904974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.904983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:53:26.497 [2024-11-26 17:50:03.904991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.904998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.905063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:26.497 [2024-11-26 17:50:03.905075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:53:26.497 [2024-11-26 17:50:03.905083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:26.497 [2024-11-26 17:50:03.905092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:26.497 [2024-11-26 17:50:03.905223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 442.984 ms, result 0
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:53:28.425 Remove shared memory files
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84430
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:53:28.425 ************************************
00:53:28.425 END TEST ftl_upgrade_shutdown
00:53:28.425 ************************************
00:53:28.425
00:53:28.425 real 1m40.706s
00:53:28.425 user 2m16.427s
00:53:28.425 sys 0m26.783s
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:28.425 17:50:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:53:28.425 Process with pid 77495 is not found
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@14 -- # killprocess 77495
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 77495 ']'
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@958 -- # kill -0 77495
00:53:28.425 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77495) - No such process
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77495 is not found'
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84930
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:53:28.425 17:50:05 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84930
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 84930 ']'
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:53:28.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:53:28.425 17:50:05 ftl -- common/autotest_common.sh@10 -- # set +x
00:53:28.425 [2024-11-26 17:50:05.643230] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:53:28.425 [2024-11-26 17:50:05.643481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84930 ]
00:53:28.425 [2024-11-26 17:50:05.821689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:28.685 [2024-11-26 17:50:05.975563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:53:29.625 17:50:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:53:29.625 17:50:07 ftl -- common/autotest_common.sh@868 -- # return 0
00:53:29.625 17:50:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:53:30.228 nvme0n1
00:53:30.228 17:50:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:53:30.228 17:50:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:53:30.228 17:50:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:53:30.228 17:50:07 ftl -- ftl/common.sh@28 -- # stores=f8f7184d-70f9-46f8-a515-e2b802debca2
00:53:30.228 17:50:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:53:30.228 17:50:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8f7184d-70f9-46f8-a515-e2b802debca2
00:53:30.488 17:50:07 ftl -- ftl/ftl.sh@23 -- # killprocess 84930
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 84930 ']'
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@958 -- # kill -0 84930
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@959 -- # uname
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84930
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:53:30.488 killing process with pid 84930
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84930'
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@973 -- # kill 84930
00:53:30.488 17:50:07 ftl -- common/autotest_common.sh@978 -- # wait 84930
00:53:33.795 17:50:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:53:33.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:53:33.795 Waiting for block devices as requested
00:53:33.795 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:53:33.795 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:53:33.795 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:53:34.053 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:53:39.330 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:53:39.330 Remove shared memory files
00:53:39.330 17:50:16 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:53:39.330 17:50:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:53:39.330 17:50:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:53:39.330 17:50:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:53:39.330 17:50:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:53:39.330 17:50:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:53:39.330 17:50:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:53:39.330
00:53:39.330 real 11m11.039s
00:53:39.330 user 14m0.724s
00:53:39.330 sys 1m30.320s
00:53:39.330 17:50:16 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:39.330 ************************************
00:53:39.330 END TEST ftl
00:53:39.330 ************************************
00:53:39.330 17:50:16 ftl -- common/autotest_common.sh@10 -- # set +x
00:53:39.330 17:50:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:53:39.330 17:50:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:53:39.330 17:50:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:53:39.330 17:50:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:53:39.330 17:50:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:53:39.330 17:50:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:53:39.330 17:50:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:53:39.330 17:50:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:53:39.330 17:50:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:53:39.330 17:50:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:53:39.331 17:50:16 -- common/autotest_common.sh@726 -- # xtrace_disable
00:53:39.331 17:50:16 -- common/autotest_common.sh@10 -- # set +x
00:53:39.331 17:50:16 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:53:39.331 17:50:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:53:39.331 17:50:16 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:53:39.331 17:50:16 -- common/autotest_common.sh@10 -- # set +x
00:53:41.236 INFO: APP EXITING
00:53:41.236 INFO: killing all VMs
00:53:41.236 INFO: killing vhost app
00:53:41.236 INFO: EXIT DONE
00:53:41.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:53:42.064 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:53:42.064 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:53:42.064 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:53:42.064 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:53:42.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:53:43.203 Cleaning
00:53:43.203 Removing: /var/run/dpdk/spdk0/config
00:53:43.203 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:53:43.203 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:53:43.203 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:53:43.203 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:53:43.203 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:53:43.203 Removing: /var/run/dpdk/spdk0/hugepage_info
00:53:43.203 Removing: /var/run/dpdk/spdk0
00:53:43.203 Removing: /var/run/dpdk/spdk_pid57786
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58032
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58272
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58376
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58438
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58571
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58595
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58810
00:53:43.203 Removing: /var/run/dpdk/spdk_pid58922
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59040
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59173
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59281
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59326
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59368
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59439
00:53:43.203 Removing: /var/run/dpdk/spdk_pid59556
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60016
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60105
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60179
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60201
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60365
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60387
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60549
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60565
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60640
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60664
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60733
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60757
00:53:43.203 Removing: /var/run/dpdk/spdk_pid60963
00:53:43.203 Removing: /var/run/dpdk/spdk_pid61005
00:53:43.203 Removing: /var/run/dpdk/spdk_pid61088
00:53:43.203 Removing: /var/run/dpdk/spdk_pid61289
00:53:43.203 Removing: /var/run/dpdk/spdk_pid61384
00:53:43.203 Removing: /var/run/dpdk/spdk_pid61437
00:53:43.203 Removing: /var/run/dpdk/spdk_pid61906
00:53:43.203 Removing: /var/run/dpdk/spdk_pid62005
00:53:43.203 Removing: /var/run/dpdk/spdk_pid62136
00:53:43.203 Removing: /var/run/dpdk/spdk_pid62195
00:53:43.203 Removing: /var/run/dpdk/spdk_pid62220
00:53:43.203 Removing: /var/run/dpdk/spdk_pid62304
00:53:43.203 Removing: /var/run/dpdk/spdk_pid62959
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63007
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63518
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63627
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63758
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63821
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63848
00:53:43.203 Removing: /var/run/dpdk/spdk_pid63879
00:53:43.203 Removing: /var/run/dpdk/spdk_pid65792
00:53:43.203 Removing: /var/run/dpdk/spdk_pid65951
00:53:43.203 Removing: /var/run/dpdk/spdk_pid65956
00:53:43.203 Removing: /var/run/dpdk/spdk_pid65975
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66021
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66025
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66037
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66097
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66101
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66117
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66163
00:53:43.203 Removing: /var/run/dpdk/spdk_pid66167
00:53:43.462 Removing: /var/run/dpdk/spdk_pid66180
00:53:43.462 Removing: /var/run/dpdk/spdk_pid67643
00:53:43.462 Removing: /var/run/dpdk/spdk_pid67758
00:53:43.462 Removing: /var/run/dpdk/spdk_pid69182
00:53:43.462 Removing: /var/run/dpdk/spdk_pid70932
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71017
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71098
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71213
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71312
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71413
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71511
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71592
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71702
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71805
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71912
00:53:43.462 Removing: /var/run/dpdk/spdk_pid71998
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72083
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72200
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72297
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72407
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72501
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72583
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72696
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72793
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72891
00:53:43.462 Removing: /var/run/dpdk/spdk_pid72986
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73066
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73146
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73231
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73340
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73441
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73537
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73628
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73712
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73793
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73874
00:53:43.462 Removing: /var/run/dpdk/spdk_pid73983
00:53:43.462 Removing: /var/run/dpdk/spdk_pid74079
00:53:43.462 Removing: /var/run/dpdk/spdk_pid74230
00:53:43.462 Removing: /var/run/dpdk/spdk_pid74525
00:53:43.462 Removing: /var/run/dpdk/spdk_pid74572
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75040
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75224
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75325
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75447
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75503
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75534
00:53:43.462 Removing: /var/run/dpdk/spdk_pid75939
00:53:43.462 Removing: /var/run/dpdk/spdk_pid76011
00:53:43.462 Removing: /var/run/dpdk/spdk_pid76108
00:53:43.462 Removing: /var/run/dpdk/spdk_pid76535
00:53:43.462 Removing: /var/run/dpdk/spdk_pid76688
00:53:43.462 Removing: /var/run/dpdk/spdk_pid77495
00:53:43.462 Removing: /var/run/dpdk/spdk_pid77644
00:53:43.462 Removing: /var/run/dpdk/spdk_pid77857
00:53:43.462 Removing: /var/run/dpdk/spdk_pid77965
00:53:43.462 Removing: /var/run/dpdk/spdk_pid78302
00:53:43.462 Removing: /var/run/dpdk/spdk_pid78561
00:53:43.462 Removing: /var/run/dpdk/spdk_pid78944
00:53:43.462 Removing: /var/run/dpdk/spdk_pid79180
00:53:43.462 Removing: /var/run/dpdk/spdk_pid79295
00:53:43.462 Removing: /var/run/dpdk/spdk_pid79364
00:53:43.462 Removing: /var/run/dpdk/spdk_pid79502
00:53:43.462 Removing: /var/run/dpdk/spdk_pid79537
00:53:43.463 Removing: /var/run/dpdk/spdk_pid79608
00:53:43.721 Removing: /var/run/dpdk/spdk_pid79808
00:53:43.721 Removing: /var/run/dpdk/spdk_pid80060
00:53:43.721 Removing: /var/run/dpdk/spdk_pid80446
00:53:43.721 Removing: /var/run/dpdk/spdk_pid80838
00:53:43.721 Removing: /var/run/dpdk/spdk_pid81247
00:53:43.721 Removing: /var/run/dpdk/spdk_pid81697
00:53:43.721 Removing: /var/run/dpdk/spdk_pid81845
00:53:43.721 Removing: /var/run/dpdk/spdk_pid81934
00:53:43.721 Removing: /var/run/dpdk/spdk_pid82476
00:53:43.721 Removing: /var/run/dpdk/spdk_pid82541
00:53:43.721 Removing: /var/run/dpdk/spdk_pid82978
00:53:43.721 Removing: /var/run/dpdk/spdk_pid83350
00:53:43.721 Removing: /var/run/dpdk/spdk_pid83812
00:53:43.721 Removing: /var/run/dpdk/spdk_pid83940
00:53:43.721 Removing: /var/run/dpdk/spdk_pid83998
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84068
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84136
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84200
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84430
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84508
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84585
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84658
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84698
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84787
00:53:43.721 Removing: /var/run/dpdk/spdk_pid84930
00:53:43.721 Clean
00:53:43.721 17:50:21 -- common/autotest_common.sh@1453 -- # return 0
00:53:43.721 17:50:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:53:43.721 17:50:21 -- common/autotest_common.sh@732 -- # xtrace_disable
00:53:43.721 17:50:21 -- common/autotest_common.sh@10 -- # set +x
00:53:43.721 17:50:21 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:53:43.721 17:50:21 -- common/autotest_common.sh@732 -- # xtrace_disable
00:53:43.721 17:50:21 -- common/autotest_common.sh@10 -- # set +x
00:53:43.981 17:50:21 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:53:43.981 17:50:21 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:53:43.981 17:50:21 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:53:43.981 17:50:21 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:53:43.981 17:50:21 -- spdk/autotest.sh@398 -- # hostname
00:53:43.981 17:50:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:53:43.981 geninfo: WARNING: invalid characters removed from testname!
00:54:10.539 17:50:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:13.838 17:50:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:16.444 17:50:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:18.997 17:50:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:21.532 17:50:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:24.063 17:51:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:26.608 17:51:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:54:26.608 17:51:03 -- spdk/autorun.sh@1 -- $ timing_finish
00:54:26.608 17:51:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:54:26.608 17:51:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:54:26.608 17:51:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:54:26.608 17:51:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:54:26.608 + [[ -n 5461 ]]
00:54:26.608 + sudo kill 5461
00:54:26.617 [Pipeline] }
00:54:26.634 [Pipeline] // timeout
00:54:26.639 [Pipeline] }
00:54:26.654 [Pipeline] // stage
00:54:26.659 [Pipeline] }
00:54:26.675 [Pipeline] // catchError
00:54:26.685 [Pipeline] stage
00:54:26.687 [Pipeline] { (Stop VM)
00:54:26.700 [Pipeline] sh
00:54:26.981 + vagrant halt
00:54:30.268 ==> default: Halting domain...
00:54:38.444 [Pipeline] sh
00:54:38.724 + vagrant destroy -f
00:54:41.289 ==> default: Removing domain...
00:54:41.870 [Pipeline] sh
00:54:42.153 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:54:42.163 [Pipeline] }
00:54:42.177 [Pipeline] // stage
00:54:42.182 [Pipeline] }
00:54:42.195 [Pipeline] // dir
00:54:42.201 [Pipeline] }
00:54:42.217 [Pipeline] // wrap
00:54:42.223 [Pipeline] }
00:54:42.237 [Pipeline] // catchError
00:54:42.250 [Pipeline] stage
00:54:42.253 [Pipeline] { (Epilogue)
00:54:42.268 [Pipeline] sh
00:54:42.552 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:54:49.247 [Pipeline] catchError
00:54:49.249 [Pipeline] {
00:54:49.264 [Pipeline] sh
00:54:49.549 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:54:49.549 Artifacts sizes are good
00:54:49.560 [Pipeline] }
00:54:49.575 [Pipeline] // catchError
00:54:49.587 [Pipeline] archiveArtifacts
00:54:49.595 Archiving artifacts
00:54:49.710 [Pipeline] cleanWs
00:54:49.722 [WS-CLEANUP] Deleting project workspace...
00:54:49.722 [WS-CLEANUP] Deferred wipeout is used...
00:54:49.728 [WS-CLEANUP] done
00:54:49.729 [Pipeline] }
00:54:49.748 [Pipeline] // stage
00:54:49.753 [Pipeline] }
00:54:49.767 [Pipeline] // node
00:54:49.772 [Pipeline] End of Pipeline
00:54:49.813 Finished: SUCCESS